
Showing papers in "Marketing Science in 1998"


Journal ArticleDOI
TL;DR: In this paper, the authors developed and estimated a dynamic model of the duration of the provider-customer relationship that focuses on the role of customer satisfaction, and the model is estimated as a left-truncated, proportional hazards regression with cross-sectional and time series data describing cellular customers' perceptions and behavior over a 22-month period.
Abstract: Many service organizations have embraced relationship marketing with its focus on maximizing customer lifetime value. Recently, there has been considerable controversy about whether there is a link between customer satisfaction and retention. This research question is important to researchers who are attempting to understand how customers' assessments of services influence their subsequent behavior. However, it is equally vital to managers who require a better understanding of the relationship between satisfaction and the duration of the provider-customer relationship to identify specific actions that can increase retention and profitability in the long run. Since there is very little empirical evidence regarding this research question, this study develops and estimates a dynamic model of the duration of the provider-customer relationship that focuses on the role of customer satisfaction. This article models the duration of the customer's relationship with an organization that delivers a continuously provided service, such as utilities, financial services, and telecommunications. In the model, the duration of the provider-customer relationship is postulated to depend on the customer's subjective expected value of the relationship, which he/she updates according to an anchoring and adjustment process. It is hypothesized that cumulative satisfaction serves as an anchor that is updated with new information obtained during service experiences. The model is estimated as a left-truncated, proportional hazards regression with cross-sectional and time series data describing cellular customers' perceptions and behavior over a 22-month period. The results indicate that customer satisfaction ratings elicited prior to any decision to cancel or stay loyal to the provider are positively related to the duration of the relationship. The strength of the relationship between duration times and satisfaction levels depends on the length of customers' prior experience with the organization. Customers who have many months' experience with the organization weigh prior cumulative satisfaction more heavily and new information relatively less heavily. The duration of the service provider-customer relationship also depends on whether customers experienced service transactions or failures. The effects of perceived losses arising from transactions or service failures on duration times are directly weighted by prior satisfaction, creating contrast and assimilation effects. How can service organizations develop longer relationships with customers? Since customers weigh prior cumulative satisfaction heavily, organizations should focus on customers in the early stages of the relationship: if customers' experiences are not satisfactory, the relationship is likely to be very short. There is considerable heterogeneity across customers because some customers have a higher utility for the service than others. However, certain types of service encounters are potential relationship "landmines" because customers are highly sensitive to the costs/losses arising from interactions with service organizations and insensitive to the benefits/gains. Thus, the incidence and quality of service encounters can be early indicators of whether an organization's relationship with a customer is flourishing or in jeopardy. Unfortunately, organizations with good prior service levels will suffer more when customers perceive that they have suffered a loss arising from a service encounter, due to the existence of contrast effects.
However, experienced customers are less sensitive to such losses because they tend to weigh prior satisfaction levels heavily. By modeling the duration of the provider-customer relationship, it is possible to predict the revenue impact of service improvements in the same manner as other resource allocation decisions. The calculations in this article show that changes in customer satisfaction can have important financial implications for the organization because lifetime revenues from an individual customer depend on the duration of his/her relationship, as well as the dollar amount of his/her purchases across billing cycles. Satisfaction levels explain a substantial portion of the variance in the durations of service provider-customer relationships across customers, comparable to the effect of price. Consequently, it is a popular misconception that organizations that focus on customer satisfaction are failing to manage customer retention. Rather, this article suggests that service organizations should be proactive and learn from customers before they defect by understanding their current satisfaction levels. Managers and researchers may have underestimated the importance of the link between customer satisfaction and retention because the relationship between satisfaction and duration times is very complex and difficult to detect without advanced statistical techniques.
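This estimation strategy is easy to prototype with modern survival-analysis tooling. The sketch below is a hypothetical, minimal illustration (simulated data, invented parameter values), not the authors' code: it fits a proportional hazards regression with a time-varying cumulative-satisfaction covariate using the lifelines library, whose start/stop panel format also accommodates the delayed entry that left truncation requires.

```python
# Minimal sketch: time-varying proportional hazards regression of
# relationship duration on cumulative satisfaction (simulated data).
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(0)
rows = []
for cid in range(300):
    sat = rng.uniform(2.0, 5.0)          # initial cumulative satisfaction anchor
    for month in range(22):              # 22-month observation window
        sat = 0.9 * sat + 0.1 * rng.uniform(1.0, 5.0)  # anchoring and adjustment
        hazard = 0.02 * np.exp(-0.8 * (sat - 3.5))     # higher satisfaction, lower churn risk
        cancelled = rng.random() < hazard
        rows.append((cid, month, month + 1, sat, int(cancelled)))
        if cancelled:
            break
panel = pd.DataFrame(rows, columns=["id", "start", "stop", "cum_sat", "cancelled"])

ctv = CoxTimeVaryingFitter()
ctv.fit(panel, id_col="id", event_col="cancelled",
        start_col="start", stop_col="stop")
ctv.print_summary()   # expect a negative coefficient on cum_sat
```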

1,900 citations


Journal ArticleDOI
TL;DR: The authors proposed a "double-entry" mental accounting model of the reciprocal interactions between the pleasure of consumption and the pain of paying and drew out its implications for consumer behavior and hedonics, showing that consumers will find it less painful to pay for, and hence will prefer, flat-rate pricing schemes such as unlimited Internet access at a fixed monthly price, even if it involves paying more for the same usage.
Abstract: In the standard economic account of consumer behavior the cost of a purchase takes the form of a reduction in future utility when expenditures that otherwise could have been made are forgone. The reality of consumer hedonics is different. When people make purchases, they often experience an immediate pain of paying, which can undermine the pleasure derived from consumption. The ticking of the taxi meter, for example, reduces one's pleasure from the ride. We propose a "double-entry" mental accounting theory that describes the nature of these reciprocal interactions between the pleasure of consumption and the pain of paying and draws out their implications for consumer behavior and hedonics. A central assumption of the model, which we call prospective accounting, is that consumption that has already been paid for can be enjoyed as if it were free and that the pain associated with payments made prior to consumption but not after is buffered by thoughts of the benefits that the payments will finance. Another important concept is coupling, which refers to the degree to which consumption calls to mind thoughts of payment, and vice versa. Some financing methods, such as credit cards, tend to weaken coupling, whereas others, such as cash payment, produce tight coupling. Our model makes a variety of predictions that are at variance with economic formulations. Contrary to the standard prediction that people will finance purchases to minimize the present value of payments, our model predicts strong debt aversion-that they should prefer to prepay for consumption or to get paid for work after it is performed. Such pay-before sequences confer hedonic benefits because consumption can be enjoyed without thinking about the need to pay for it in the future. Likewise, when paying beforehand, the pain of paying is mitigated by thoughts of future consumption benefits. Contrary to the economic prediction that consumers should prefer to pay, at the margin, for what they consume, our model predicts that consumers will find it less painful to pay for, and hence will prefer, flat-rate pricing schemes such as unlimited Internet access at a fixed monthly price, even if it involves paying more for the same usage. Other predictions concern spending patterns with cash, charge, or credit cards, and preferences for the earmarking of purchases. We test these predictions in a series of surveys and in a conjoint-like analysis that pitted our double-entry mental accounting model against a standard discounting formulation and another benchmark that did not incorporate hedonic interactions between consumption and payments. Our model provides a better fit of the data for 60% of the subjects; the discounting formulation provides a better fit for only 29% of the subjects even when allowing for positive and negative discount rates. The pain of paying, we argue, plays an important role in consumer self-regulation, but is hedonically costly. From a hedonic perspective the ideal situation is one in which payments are tightly coupled to consumption so that paying evokes thoughts about the benefits being financed but consumption is decoupled from payments so that consumption does not evoke thoughts about payment. From an efficiency perspective, however, it is important for consumers to be aware of what they are paying for consumption. This creates a tension between hedonic efficiency and what we call decision efficiency. 
Various institutional arrangements, such as financing of public parks through taxes or usage fees, play into this tradeoff. A producer developing a pricing structure for its product or service should be aware of these two conflicting objectives and should try to devise a structure that reconciles them.
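The double-entry bookkeeping can be summarized in a stylized pair of value expressions; the notation below is our illustrative shorthand, not the paper's exact formulation.

```latex
% Stylized double-entry notation (our shorthand, not the paper's exact model):
% c = pleasure value of consumption, p = payment, alpha/beta = coupling strengths.
\underbrace{c - \alpha\, p}_{\text{net pleasure of consuming}}
\qquad\qquad
\underbrace{-\,(p - \beta\, c)}_{\text{net pain of paying}}
\qquad 0 \le \alpha,\ \beta \le 1
% Credit cards lower alpha (consumption decoupled from payment); prepayment
% means later consumption proceeds with alpha near zero ("enjoyed as if free"),
% while at the moment of payment beta buffers the pain with thoughts of the
% financed benefits -- the prospective-accounting assumption described above.
```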

1,133 citations


Journal ArticleDOI
TL;DR: In this paper, the authors use multiple empirical methods to show that consumers voluntarily and strategically ration their purchase quantities of goods that are likely to be consumed on impulse and that therefore may pose self-control problems.
Abstract: Consumers' attempts to control their unwanted consumption impulses influence many everyday purchases with broad implications for marketers' pricing policies. Addressing theoreticians and practitioners alike, this paper uses multiple empirical methods to show that consumers voluntarily and strategically ration their purchase quantities of goods that are likely to be consumed on impulse and that therefore may pose self-control problems. For example, many regular smokers buy their cigarettes by the pack, although they could easily afford to buy 10-pack cartons. These smokers knowingly forgo sizable per-unit savings from quantity discounts, which they could realize if they bought cartons; by rationing their purchase quantities, they also self-impose additional transactions costs on marginal consumption, which makes excessive smoking overly difficult and costly. Such strategic self-imposition of constraints is intuitively appealing yet theoretically problematic. The marketing literature lacks operationalizations and empirical tests of such consumption self-control strategies and of their managerial implications. This paper provides experimental evidence of the operation of consumer self-control and empirically illustrates its direct implications for the pricing of consumer goods. Moreover, the paper develops a conceptual framework for the design of empirical tests of such self-imposed constraints on consumption in consumer goods markets. Within matched pairs of products, we distinguish relative "virtue" and "vice" goods whose preference ordering changes with whether consumers evaluate immediate or delayed consumption consequences. For example, ignoring long-term health effects, many smokers prefer regular (relative vice) to light (relative virtue) cigarettes, because they prefer the taste of the former. However, ignoring these short-term taste differences, the same smokers prefer light to regular cigarettes when they consider the long-term health effects of smoking. These preference orders can lead to dynamically inconsistent consumption choices by consumers whose tradeoffs between the immediate and delayed consequences of consumption depend on the time lag between purchase and consumption. This creates a potential self-control problem, because these consumers will be tempted to overconsume the vices they have in stock at home. Purchase quantity rationing helps them solve the self-control problem by limiting their stock and hence their consumption opportunities. Such rationing implies that, per purchase occasion, vice consumers will be less likely than virtue consumers to buy larger quantities in response to unit price reductions such as quantity discounts. We first test this prediction in two laboratory experiments. We then examine the external validity of the results at the retail level with a field survey of quantity discounts and with a scanner data analysis of chain-wide store-level demand across a variety of different pairs of matched vice (regular) and virtue (reduced fat, calorie, or caffeine, etc.) product categories. The analyses of these experimental, field, and scanner data provide strong convergent evidence of a characteristic crossover in demand schedules for relative vices and virtues for categories as diverse as, among others, potato chips, chocolate chip cookies, cream cheese, beer, soft drinks, ice cream and frozen yogurt, chewing gum, coffee, and beef and turkey bologna.
Vice consumers' demand increases less in response to price reductions than virtue consumers' demand, although their preferences are not generally weaker for vices than for virtues. Constraints on vice purchases are self-imposed and strategic rather than driven by simple preferences. We suggest that rationing their vice inventories at the point of purchase allows consumers to limit subsequent consumption. As a result of purchase quantity rationing, however, vice buyers forgo savings from price reductions through quantity discounts, effectively paying price premiums for the opportunity to engage in self-control. Thus, vice consumers who ration their purchase quantities are relatively price insensitive. From a managerial and public policy perspective, our findings should offer marketing practitioners in many consumer goods industries new opportunities to increase profits through segmentation and price discrimination based on consumer self-control. They can charge premium prices for small sizes of vices, relative to the corresponding quantity discounts for virtues. Virtue consumers, on the other hand, will buy larger amounts even when quantity discounts are relatively shallow. A key conceptual contribution of this paper lies in showing how marketing researchers can investigate a whole class of strategic self-constraining consumer behaviors empirically. Moreover, this research is the first to extend previous, theoretical work on impulse control by empirically demonstrating its broader implications for marketing decision making.
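The crossover pattern can be reproduced with a toy calculation. The sketch below uses invented numbers and our own shorthand (a declining marginal value of extra units, plus a self-imposed "temptation" shadow cost on marginal inventory for vice goods); it is illustrative, not the paper's model.

```python
# Illustrative sketch: quantity response of "vice" vs. "virtue" buyers to a
# deepening quantity discount (all numbers hypothetical).
def units_bought(unit_price, temptation=0.0, max_units=12):
    # Buy units while marginal value exceeds price plus the self-imposed
    # shadow cost of holding one more unit of a tempting good in stock.
    units = 0
    for k in range(1, max_units + 1):
        marginal_value = 2.0 * (0.95 ** k)   # satiation: declining value per unit
        shadow_cost = temptation * k         # convex self-control cost of stock
        if marginal_value >= unit_price + shadow_cost:
            units = k
        else:
            break
    return units

for price in (1.60, 1.40, 1.20):             # deepening quantity discount
    virtue = units_bought(price, temptation=0.0)
    vice = units_bought(price, temptation=0.15)
    print(f"unit price {price:.2f}: virtue buys {virtue}, vice buys {vice}")
# Virtue quantities grow faster as the price falls (4 -> 6 -> 9 units) than
# vice quantities (1 -> 2 -> 3): the crossover in demand schedules above.
```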

809 citations


Journal ArticleDOI
TL;DR: In this paper, a parsimonious model of multiple-channel competition is introduced that accommodates the following consumer and market characteristics: the relative attractiveness of retail shopping varies across consumers, the fit with the direct channel varies across product categories, and the strength of existing retail presence in local markets moderates competition.
Abstract: Consumers now purchase several offerings from direct sellers, including catalog and Internet marketers. These direct channels exist in parallel with the conventional retail stores. The availability of multiple channels has significant implications for the performance of consumer markets. The literature in marketing and economics has, however, been dominated by a focus on the conventional retail sector. This paper is an effort toward modeling competition in the multiple-channel environment from a strategic viewpoint. At the outset, a parsimonious model that accommodates the following consumer and market characteristics is introduced. First, the relative attractiveness of retail shopping varies across consumers. Second, the fit with the direct channel varies across product categories. Third, the strength of existing retail presence in local markets moderates competition. Fourth, in contrast with the fixed location of the retail store that anchors its localized market power, the location of the direct marketer is irrelevant to the competitive outcome. The model is first applied in a setting where consumers have complete knowledge of product availability and prices in all channels. In the resulting equilibrium, the direct marketer acts as a competitive wedge between retail stores. The direct presence is so strong that each retailer competes against the remotely located direct marketer, rather than against neighboring retailers. This outcome has implications for the marketing mix of retailers, which has traditionally been tuned to attract consumers choosing between retail stores. In the context of market entry, conditions under which a direct channel can access a local market in retail entry equilibrium are derived. Our analysis suggests that the traditional focus on retail entry equilibria may not yield informative or relevant findings when direct channels are a strong presence. Next, the role of information in multiple-channel markets is modeled. This issue is particularly relevant in the context of direct marketing where the seller can typically control the level of information in the marketplace, sometimes on a customer-by-customer basis (e.g., by deciding on the mailing list for a catalog campaign). When a certain fraction of consumers does not receive information from the direct marketer, the retailers compete with each other for that fraction of the market. The retailer's marketing mix has to be tuned, in this case, to jointly address direct and neighboring retail competition. The level of information disseminated by the direct marketer is shown to have strategic implications, and the use of market coverage as a lever to control competition is described. Even with zero information costs, providing information to all consumers may not be optimal under some circumstances. In particular, when the product is not well adapted to the direct channel, the level of market information about the direct option should ideally be lowered. The only way to compete with retailers on a larger scale with a poorly adapted product is by lowering direct prices, which lowers profits. Lowering market information levels and allowing retailers to compete more with each other facilitates a higher equilibrium retail price. In turn, this allows a higher direct price to be charged and improves overall direct profit. On the other hand, when the product is well adapted, increasing direct market presence and engaging in greater competition with the retail sector yields higher returns.
The finding that high market coverage may depress profits raises some issues for further exploration. First, implementing the optimal coverage is straightforward when the seller controls the information mechanism, as in the case of catalog marketing. The Internet, in contrast, is an efficient mechanism to transmit information, but does not provide the sellers with such control over the level of market information. A key reason is that the initiative to gather information on the Internet lies largely with consumers. The design and implementation of mechanisms to control aggregate information levels in electronic markets can, therefore, be an important theme for research and managerial interest. Second, direct marketers have traditionally relied on the statistical analysis of customer records to decide on contact policies. The analysis in this paper reveals that these policies can have significant strategic implications as well. Research that integrates the statistical and strategic aspects could make a valuable contribution. The paper concludes with a discussion of issues for future research in multiple-channel markets, including avenues to model competition in settings with multiple direct marketers.
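The market-coverage lever can be illustrated numerically. The toy sketch below is our own stylized setup with invented numbers (a single retailer serving a unit line of consumers, rather than the paper's full spatial model): it iterates price best responses on a grid and shows that shrinking the direct marketer's coverage softens competition and raises both equilibrium prices.

```python
# Toy sketch (our stylized setup, not the paper's model): a retailer on [0,1]
# with travel cost t competes with a direct marketer whose offer reaches only
# a fraction phi of consumers. Best-response iteration on a price grid.
import numpy as np

v, t, delta = 2.0, 1.0, 0.3          # gross value, travel cost, direct "fit" cost
grid = np.linspace(0.05, 2.0, 200)   # candidate prices
x = np.linspace(0, 1, 501)           # consumer locations

def profits(pr, pd, phi):
    retail_surplus = v - pr - t * x              # location-dependent
    direct_surplus = v - pd - delta              # location-free (point 4 above)
    uninformed_retail = np.mean(retail_surplus >= 0)
    informed_retail = np.mean((retail_surplus >= 0) & (retail_surplus >= direct_surplus))
    informed_direct = np.mean((direct_surplus >= 0) & (direct_surplus > retail_surplus))
    return (pr * ((1 - phi) * uninformed_retail + phi * informed_retail),
            pd * phi * informed_direct)

for phi in (1.0, 0.5, 0.2):
    pr = pd = 1.0
    for _ in range(50):              # iterate best responses to convergence
        pr = grid[np.argmax([profits(p, pd, phi)[0] for p in grid])]
        pd = grid[np.argmax([profits(pr, p, phi)[1] for p in grid])]
    print(f"coverage {phi:.1f}: retail price {pr:.2f}, direct price {pd:.2f}")
# Lower coverage lets the retail price rise, which in turn supports a higher
# direct price -- the coverage-as-lever effect described above.
```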

539 citations


Journal ArticleDOI
TL;DR: This paper advances the idea that consumer shopping behavior, as defined by the average size of the shopping basket and the frequency of store visits, is an important determinant of the store choice decision when stores offer different price formats.
Abstract: In recent years, the supermarket industry has become increasingly competitive. One outcome has been the proliferation of a variety of pricing formats, and considerable debate among academics and practitioners about how these formats affect consumers' store choice behavior. This paper advances the idea that consumer shopping behavior as defined by average size of the shopping basket and the frequency of store visits is an important determinant of the store choice decision when stores offer different price formats. A recent Wall Street Journal article that summarized the result of Bruno's management switching the chain from EDLP to HILO illustrates the importance of this issue: "The company's price-conscious customers, used to shopping for a fixed basket of goods, stayed away in droves." Thus, the audience for this paper includes practitioners and academics who wish to understand store choices or predict how a change in price format might affect store profitability and the mix of clientele that shop there. This paper attempts to understand the relationship between grocery shopping behavior, retail price format, and store choice by posing and answering the following questions. First, after controlling for other factors (e.g., distance to the store, prior experience in the store, advertised specials), do consumer expectations about prices for a basket of grocery products ("expected basket attractiveness") influence the store choice decision? This is a fairly straightforward test of the effect of price expectations on store choice. Second, are different pricing formats (EDLP or HILO) more or less attractive to different types of shoppers? To adequately answer the second question, we must link consumers' category purchase decisions, which collectively define the market basket, and the store choice decision. We study the research questions using two complementary approaches. First, we develop a stylized theory of consumer shopping behavior under price uncertainty. The principal features and results from the stylized model can be summarized as follows. Shoppers are defined in a relative sense as either large or small basket shoppers. Thus, we abstract from the vicissitudes of individual shopping trips and focus on meaningful differences across shoppers in terms of the expected basket size per trip. The shoppers make category purchase incidence decisions and can choose to shop in either an EDLP or a HILO store. Large basket shoppers are shoppers who have a relatively high probability of purchase for any given category, and as such they are more captive to prices across many different categories. The first two propositions summarize the price responsiveness of shoppers. In particular, the large basket shoppers are less responsive to price in their individual category purchase incidence decisions; this makes them more responsive to the expected basket price in their store choice decisions. This key structural implication of the model highlights an asymmetry between response at the category level and response at the store level. The result is quite intuitive; a large basket shopper with less ability to respond to prices in individual product categories will be more sensitive to the expected cost of the overall portfolio (the market basket) when choosing a store. The final proposition derives the price at which a given shopper will be indifferent between an EDLP and a HILO store.
The key insight is that as a shopper increases his or her tendency to become a large basket shopper, the EDLP store can increase its constant price closer and closer to the average price in the HILO store. Conversely, as the shopper becomes more of a small basket shopper, the EDLP store must lower its price closer to the deal price in the HILO store. Thus, we have the interesting result that small basket shoppers prefer HILO stores, even at higher average prices. The empirical testing mirrors the development of the consumer theory. We test the implications of the propositions using a market basket scanner panel database. The database includes two years of shopping data for 1,042 households in two separate market areas. We first use household-level grocery expenditures to model the probability that a household is a large or small basket shopper. Subsequently, we estimate purchase incidence and store choice models. We find that after controlling for important factors such as household distance to the store, previous experience in the store, and advertised specials, price expectations for the basket influence store choice. Furthermore, EDLP stores get a greater than expected share of business from large basket shoppers; HILO stores get a greater than expected share from small basket shoppers. Consistent with the implications of the propositions, large basket shoppers are relatively price inelastic in their category purchase incidence decisions and price elastic in their store choice decisions.
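A back-of-envelope version of that final proposition can be written down directly. The numbers below are invented for illustration, and "flexibility" is our shorthand for a shopper's ability to time category purchases to HILO deals (high for small basket shoppers, low for large basket shoppers).

```python
# Back-of-envelope sketch of the EDLP/HILO indifference price (illustrative
# parameterization, not the paper's formal model).
p_deal, p_reg, deal_freq = 2.00, 3.00, 0.3
avg_hilo = deal_freq * p_deal + (1 - deal_freq) * p_reg   # 2.70 average HILO price

for flexibility in (0.0, 0.25, 0.5, 0.75, 1.0):
    # Expected price per item actually paid at the HILO store: flexible
    # shoppers buy on deal; inflexible shoppers pay the average price.
    hilo_paid = flexibility * p_deal + (1 - flexibility) * avg_hilo
    print(f"deal-timing flexibility {flexibility:.2f} -> EDLP indifference price {hilo_paid:.2f}")
# Output runs from 2.70 (large basket shopper: EDLP can match the HILO
# *average*) down to 2.00 (small basket shopper: EDLP must match the *deal*
# price), mirroring the proposition above.
```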

528 citations


Journal ArticleDOI
TL;DR: The authors analyze how manufacturers should coordinate distribution channels when retailers compete in price as well as in important nonprice factors such as the provision of product information, free repair, faster check-out, or after-sales service.
Abstract: This paper analyzes how manufacturers should coordinate distribution channels when retailers compete in price as well as in important nonprice factors such as the provision of product information, free repair, faster check-out, or after-sales service. Differentiation among retailers in price and nonprice service factors is a central feature of markets ranging from automobiles and appliances to gasoline and is especially observed in the coexistence of high-service retailers and lower-price discount retailers. Therefore, how a manufacturer should manage retail differentiation is an important channel management question. Yet, the approach in the existing literature has been to examine channel coordination under the standard "symmetric contracting" assumption that offering a uniform contract to all the retailers in a market will be sufficient for coordination. I bring this assumption into question and ask when it is optimal for the manufacturer to use the channel contract to deliberately induce retail differentiation even if the retailers were ex ante identical in their cost and other characteristics. The paper identifies the type of channel contracts that can endogenously induce symmetry as opposed to differentiation among retailers. Next, the paper highlights a type of channel conflict that arises from the very nature of retail price-service competition. A manufacturer might find the retailers to be excessively biased towards price competition at the cost of service provision or vice versa. The paper establishes when a manufacturer is likely to stimulate greater price as opposed to greater service competition among the retailers. The framework that I develop to address these issues highlights the role of two basic types of consumer heterogeneity. Consumers are heterogeneous in their locations as in the spatial models of horizontal differentiation and in their willingness to pay for retail services as in the models of vertical differentiation. The model also uses a natural relationship in retail markets between the travel/time cost incurred by a consumer and her willingness to pay: The more affluent consumers who have a higher willingness to pay for retail services also have a higher cost for their personal time. Given these market features, the paper analyzes the problem faced by a manufacturer who sells to competing retailers. The paper shows that the standard notion in the literature of offering similar contracts to all the retailers is sufficient only in markets with substantial locational differentiation relative to the differences in the willingness to pay. Effective channel management in these markets simply requires mechanisms that ensure that retailer interests are aligned so that they compete by offering a mix of price and service that is desirable from the manufacturer's point of view. However, in markets with small locational differentiation and substantial diversity in consumer willingness to pay, the manufacturer's problem is not just to align retailer interests, but to also use the channel contract to induce the correct level of retail differentiation. This helps the manufacturer to better cater to the diversity in consumer willingness to pay and to prevent the cut-throat competition that the retailers would otherwise have indulged in. The manufacturer can achieve this through the use of menu-based contracts. Menu-based contracts induce differentiated retailer behavior despite the fact that the retailers are not "forced" into accepting different terms of trade.
This aspect can be useful in shielding manufacturers from litigation under the Robinson-Patman Act. The paper also shows that for relatively high-ticket items retailers tend to be excessively biased towards competing in the provision of retail services. The correlation between consumer willingness to pay for service and travel costs implies that for high-ticket products, the competing retailers will focus on the more service-sensitive customers at the cost of ignoring the price-sensitive consumers in the market. The manufacturer is therefore likely to encourage greater price competition among the retailers. In contrast, for low-ticket items the manufacturer prefers to reduce price competition and encourage greater provision of services. This provides an endogenous rationale for the use of price ceilings versus floors. The basic model is also extended to consider the effect of upstream competition between manufacturers. Under upstream competition, coordinating retail price and service decisions is not always optimal for an individual manufacturer. This extension to manufacturer competition provides a basis for understanding the role of retail price-service differentiation in the context of a channel duopoly. It also shows that a mixed distribution channel (a channel in which one manufacturer chooses to be coordinated while the other chooses to be noncoordinated) can be an equilibrium in markets with weak brand loyalty.

364 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a model showing that pulsing strategies can generate greater total awareness than continuous advertising when the effectiveness of the advertisement (i.e., ad quality) varies over time.
Abstract: A key task of advertising media planners is to determine the best media schedule of advertising exposures for a certain budget. Conceptually, the planner could choose to do continuous advertising (i.e., schedule ad exposures evenly over all weeks) or follow a strategy of pulsing (i.e., advertise in some weeks of the year and not at other times). Previous theoretical analyses have shown that continuous advertising is optimal for nearly all situations. However, pulsing schedules are very common in practice. Either the practice of pulsing is inappropriate or extant models have not adequately conceptualized the effects of advertising spending over time. This paper offers a model that shows pulsing strategies can generate greater total awareness than continuous advertising when the effectiveness of the advertisement (i.e., ad quality) varies over time. Specifically, ad quality declines because of advertising wearout during periods of continuous advertising and it restores, due to forgetting effects, during periods of no advertising. Such dynamics make it worthwhile for advertisers to stop advertising when ad quality becomes very low and wait for ad quality to restore before starting the next "burst" again, as is common in practice. Based on the extensive behavioral research on advertising repetition and advertising wearout, we extend the classical Nerlove and Arrow (1962) model by incorporating the notions of repetition wearout, copy wearout, and ad quality restoration. Repetition wearout is a result of excessive frequency because ad viewers perceive that there is nothing new to be gained from processing the ad, they withdraw their attention, or they become unmotivated to react to advertising information. Copy wearout refers to the decline in ad quality due to the passage of time independent of the level of frequency. Ad quality restoration is the enhancement of ad quality during media hiatus as a consequence of viewers forgetting the details of the advertised messages, thus making ads appear "like new" when reintroduced later. The proposed model has the property that, when wearout effects are present, a strategy of pulsing is superior to continuous advertising even when the advertising response function is concave. This is illustrated by a numerical example that compares the total awareness generated by a single concentrated pulse of varying duration (blitz schedules) and continuous advertising (the even schedule). This property can be explained by the tension between the pressure to spend the fixed media budget quickly to avoid copy wearout and the opposing pressure to spread out the media spending over time to mitigate repetition wearout. The proposed model is empirically tested by using brand level data from two advertising awareness tracking studies that also include the actual spending schedules. The first data set is for a major cereal brand, while the other is for a brand of milk chocolate. Such advertising tracking studies are now a common and popular means for evaluating advertising effectiveness in many markets (e.g., Millward Brown, MarketMind). In the empirical tests, the model parameters are estimated by using the Kalman filter procedure, which is eminently suited for dynamic models because it attends to the intertemporal dependencies in awareness build-up and decay via the use of conditional densities. The estimated parameters are statistically significant, have the expected signs, and are meaningful from both theoretical and managerial viewpoints.
The proposed model fits both the data sets rather well and better than several well-known advertising models, namely, the Vidale-Wolfe, Brandaid, Litmus, and Tracker models, but not decisively better than the Nerlove-Arrow model. However, unlike the Nerlove-Arrow model, the proposed model yields different total awareness for different strategies of spending the same fixed budget, thus allowing media planners to discriminate among several media schedules. Given the empirical support for the model, the paper presents an implementable approach for utilizing it to evaluate large numbers of alternative media schedules and determine the best set of media schedules for consideration in media planning. This approach is based on an algorithm that combines a genetic algorithm with the Kalman filter procedure. The paper presents the results of applying this approach in the case studies of the cereal and milk chocolate brands. The form of the best advertising spending strategies in each case was a pulsing strategy, and there were many schedules that were an improvement over the media schedule actually used in each campaign.
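The wearout/restoration dynamics are easy to simulate. The sketch below is a stylized discrete-time rendering with invented parameter values, not the authors' estimated model: awareness builds in proportion to current ad quality and spending and decays otherwise, while quality wears out during on-air weeks and restores during hiatus weeks. For the same 52-week budget, the pulsing schedule outscores the even one under these parameters.

```python
# Stylized simulation of the extended Nerlove-Arrow idea (invented numbers).
def run(schedule, beta=0.10, decay=0.08, wearout=0.15, restore=0.20):
    awareness, quality, total = 0.0, 1.0, 0.0
    for grps in schedule:                         # weekly spending in GRPs
        awareness += beta * quality * grps - decay * awareness
        if grps > 0:
            quality *= (1 - wearout) ** grps      # repetition/copy wearout
        else:
            quality += restore * (1 - quality)    # forgetting restores quality
        total += awareness
    return total

even = [1.0] * 52                                              # continuous schedule
pulse = ([2.0] * 4 + [0.0] * 4) * 6 + [2.0] * 2 + [0.0] * 2    # bursts, same budget
assert abs(sum(even) - sum(pulse)) < 1e-9 and len(pulse) == 52
print("even :", round(run(even), 1))
print("pulse:", round(run(pulse), 1))   # higher total awareness here
```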

306 citations


Journal ArticleDOI
TL;DR: In this paper, the authors measure the covariance of both observed (linked to measured characteristics of households) and unobserved heterogeneity in marketing mix sensitivity across multiple categories and find substantial and statistically important correlations ranging from.32 for price sensitivities to.58 for feature sensitivity.
Abstract: Differences between consumers in sensitivity to marketing mix variables have been extensively documented in scanner panel data. All studies of consumer heterogeneity focus on a specific category of products and ignore the fact that the purchase behavior of panel households is often observed simultaneously in multiple categories. If sensitivity to marketing mix variables is a common consumer trait, then one should expect to see similarities in sensitivity across multiple categories. The goal in this paper is to measure the covariance of both observed (linked to measured characteristics of households) and unobserved heterogeneity in marketing mix sensitivity across multiple categories. Measurement of correlation in sensitivities across categories will serve to guide the interpretation of the literature on household heterogeneity. If there is a large correlation, one can be more confident that sensitivity to marketing variables is a fundamental household property and not simply a category-specific anomaly. Detection of correlation in sensitivities across categories requires an appropriate methodology that can handle the high dimensional covariance structures and properly account for uncertainty in estimation. For example, a simple approach might be to fit a brand choice model to each of the available categories in turn, ignoring the data in the other categories. For each category, household parameter estimates could be obtained for the parameters corresponding to price, display, and feature sensitivity. These parameter estimates could be viewed as data and the correlations across categories could be computed. Such a procedure could induce a downward bias in the estimation of correlation due to the independent sampling errors, which are present in each parameter estimate. We develop a hierarchical model structure that introduces an explicit correlation structure across categories and utilizes the data in multiple categories at the same time. To reduce the size of the covariance matrix, we use a variance components approach. We introduce household-specific demographic variables to decompose the correlation across categories into that which can be ascribed to observable and unobservable sources. Shopping behavior variables such as shopping frequency and market basket size as well as intensity of shopping in a category are also included in the model. Using data on five categories, we find substantial and statistically important correlations ranging from .32 for price sensitivities to .58 for feature sensitivity. These correlations are much larger than the correlations obtained with the state-of-the-art techniques available prior to our work. We attribute our ability to detect substantial correlations to our method, which involves the joint use of multiple category data in a parsimonious and efficient manner. Unlike previous studies with panel data, household demographic variables are found to be strongly related to price sensitivity. Higher income households are less price sensitive and large families are more price sensitive. Shopping behavior variables are also important in explaining price sensitivity. Households that visit the store often are more price sensitive. Households with larger market baskets are less price sensitive, confirming the view of Bell and Lattin (1998). Heavy user households tend to be both less price sensitive and less display sensitive.
The evidence presented here of substantial correlations validates, in part, the notion that sensitivity to marketing mix variables is a consumer trait and is not unique to specific product categories. It also opens the possibility of using information across categories in making inferences about consumer brand preference and marketing mix sensitivity, providing a richer source of information for target marketing.
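The downward bias that motivates the joint model can be demonstrated in a few lines. The simulation below uses invented numbers, not the paper's data: correlating noisy per-category estimates understates the true cross-category correlation, while the true coefficients recover it.

```python
# Attenuation demo (invented numbers): naive per-category estimation
# understates the cross-category correlation in sensitivities.
import numpy as np

rng = np.random.default_rng(1)
n, true_corr = 1000, 0.5
cov = [[1.0, true_corr], [true_corr, 1.0]]
betas = rng.multivariate_normal([-2.0, -2.0], cov, size=n)   # true sensitivities

noise_sd = 1.0                       # sampling error of per-category estimates
estimates = betas + rng.normal(0.0, noise_sd, size=(n, 2))

print("true corr     :", round(np.corrcoef(betas.T)[0, 1], 2))      # ~0.50
print("naive estimate:", round(np.corrcoef(estimates.T)[0, 1], 2))  # ~0.25
# A hierarchical model that pools categories and carries the estimation
# uncertainty through avoids this attenuation.
```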

274 citations


Journal ArticleDOI
TL;DR: In this article, a model of customer arrivals and choice between goods that explicitly allows for possible product substitution and lost sales when a customer faces a stock-out is developed in the context of retail vending, an industry that accounts for a sizable part of the retail sales of many consumer products.
Abstract: The occurrence of temporary stock-outs at retail is common in frequently purchased product categories. Available empirical evidence suggests that when faced with stock-outs, consumers are often willing to buy substitute items. An important implication of this consumer behavior is that observed sales of an item no longer provide a good measure of its core demand rate. Sales of items that stock-out are right censored, while sales of other items are inflated because of substitutions. Knowledge of the true demand rates and substitution rates is important for the retailer for a variety of category management decisions such as the ideal assortment to carry, how much to stock of each item, and how often to replenish the stock. The estimated substitution rates can also be used to infer patterns of competition between items in the category. In this paper we propose methods to estimate demand rates and substitution rates in such contexts. We develop a model of customer arrivals and choice between goods that explicitly allows for possible product substitution and lost sales when a customer faces a stock-out. The model is developed in the context of retail vending, an industry that accounts for a sizable part of the retail sales of many consumer products. We consider the information set available from two kinds of inventory tracking systems. In the best case scenario of a perpetual inventory system in which times of stock-out occurrence and cumulative sales of all goods up to these times are observed, we derive Maximum Likelihood Estimates (MLEs) of the demand parameters and show that they are especially simple and intuitive. However, state-of-the-art inventory systems in retail vending provide only periodic data, i.e., data in which times of stock-out occurrence are unobserved or "missing." For these data we show how the Expectation-Maximization (EM) algorithm can be employed to obtain the MLEs of the demand parameters by treating the stock-out times as missing data. We show an application of the model to daily sales and stocking data pooled across multiple beverage vending machines in a midwestern U.S. city. The vending machines in the application carry identical assortments of six brands. Since the number of parameters to be estimated is too large given the available data, we discuss possible restrictions of the consumer choice model to accomplish the estimation. Our results indicate that demand rates estimated naively by using observed sales rates are biased, even for items that have very few occurrences of stock-outs. We also find significant differences among the substitution rates of the six brands. The methods proposed in our paper can be modified to apply to many nonvending retail settings in which consumer choices are observed, not their preferences, and choices are constrained because of unavailability of items in the choice set. One such context is in-store grocery retailing, where similar issues of information availability arise. In this context an important issue that would need to be dealt with is changes in the retail environment caused by retail promotions.
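To make the EM idea concrete, here is a deliberately simplified sketch (single item, no substitution, simulated data with invented numbers, not the authors' estimator): period demand is Poisson, a sell-out right-censors observed sales, the E-step imputes the conditional mean of censored demand, and the M-step re-estimates the rate.

```python
# EM sketch for Poisson demand with sell-out censoring (simplified).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
true_lam, S = 8.0, 10                      # true demand rate, stock per period
demand = rng.poisson(true_lam, size=200)
sales = np.minimum(demand, S)              # observed sales
censored = sales == S                      # sell-out flags: only know D >= S

lam = sales.mean()                         # naive (downward-biased) start
for _ in range(100):                       # EM iterations
    # E-step: E[D | D >= S] for Poisson = lam * P(D >= S-1) / P(D >= S)
    e_cens = lam * poisson.sf(S - 2, lam) / poisson.sf(S - 1, lam)
    filled = np.where(censored, e_cens, sales)
    lam = filled.mean()                    # M-step: Poisson MLE on filled data

print("naive sales mean:", round(sales.mean(), 2))
print("EM estimate     :", round(lam, 2), "(true:", true_lam, ")")
```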

266 citations


Journal ArticleDOI
TL;DR: In this paper, the authors study the problem of product line design for a distribution channel with the manufacturer, the retailer or several competing retailers, and the consumers, and show that the best strategy for the manufacturer is to increase the differences in the products being supplied (in comparison to the direct selling/coordinated channel case).
Abstract: When designing a product line, a manufacturer is often aware that it does not control the ultimate targeting of the products in the line to the different consumer segments. While the manufacturer can attempt to influence the target customers through communications in appropriate media, product design, and the choice of channels of distribution, the ultimate targeting is made by a retailer, which might only care about its own interests, and is fully in control of interactions with customers, including how the product is sold and displayed. This occurrence is widespread in numerous markets, for example, frequently purchased consumer products, home appliances, personal computers, automobiles, etc. The audience for this paper includes practitioners and academics who want to better understand how a manufacturer selling through an intermediary can better induce this intermediary to have a targeting strategy consistent with the manufacturer's intentions and be willing to carry the full product line. The paper attempts to find what are the main issues a manufacturer selling through a distribution channel has to worry about when designing the product line. The problem of the product line design for a distribution channel is modeled with the manufacturer, the retailer or several competing retailers, and the consumers. In this way all the three levels of the distribution system are included. The model can be summarized as follows. The manufacturer decides how many products to have in the line and the physical characteristics (quality) of each product. Each product may or may not be targeted at a different market segment. The manufacturer decides as well how many market segments to try to target and the prices to charge the retailer for each type of product. Given the product line being offered by the manufacturer, the retailer (or competing retailers) decides which products to carry, the market segments that are going to be targeted, which product to target to each segment, and the prices being charged the consumers for each product. The consumer market is composed of different market segments that value quality differently: Some market segments are willing to pay more for quality than other market segments. The paper presents the results for two market segments, but a greater number of market segments can also be accounted for. We characterize the equilibrium targeting strategies of the manufacturer and retailer (or competing retailers) in terms of number of products in the line, the physical characteristics of each product, the prices charged by the manufacturer for each product, the consumer prices charged by the retailer for each product, and the product bought by each market segment. We compare the results with the coordinated channel outcome, where the manufacturer and the retailer work together to maximize the overall channel profits. The results are related to the other coordination problems previously studied in the literature (for example, the standard "double marginalization" effect of higher prices reducing demand) in the sense that the retailer makes decisions caring only about its own profits and not the overall channel profits. The paper shows that, if possible, the best strategy for the manufacturer is to increase the differences in the products being supplied (in comparison to the direct selling/coordinated channel case).
If the manufacturer is not able to increase these differences, it then elects to price the product line such that some of the consumer segments end up not being served. The intuition for this result is that the manufacturer, by increasing the differences among the different products, is still making major profits on the high end segments, while getting some positive profits from the low end segments and guaranteeing that the retailer actually targets the different products to the consumer segments intended by the manufacturer. Were the manufacturer not to increase the differences among the different products being offered, the retailer would only target the higher end consumer segments, because also targeting the lower end segments would involve losing too many rents on the higher end segments. Another way of seeing the problem is that the channel pricing distortions increase the cannibalization forces across the product line. The manufacturer tries to compensate for this by increasing the product differentiation across the line. If increasing the differences among the different products being offered is not possible, the manufacturer then drops the low end consumer segments and concentrates on the high end of the market (which is more profitable). The unit margins of both the retailer and manufacturer are also shown to be increasing with the quality level of the product.
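The quality-gap logic rests on a standard screening argument, which a tiny numeric example can reproduce. The sketch below is the textbook two-segment quality-discrimination benchmark with invented numbers, not the paper's full manufacturer-retailer game: serving both segments optimally requires distorting the low-end quality downward, widening the gap between the products, and the channel pricing distortions described above amplify this same cannibalization-driven force.

```python
# Two-segment screening benchmark (invented numbers): high types value
# quality at theta_h, low types at theta_l; producing quality q costs q**2/2.
import numpy as np

theta_h, theta_l, n_h, n_l = 2.0, 1.5, 0.5, 0.5
qs = np.linspace(0.0, 3.0, 301)

best = ((0.0, 0.0), -np.inf)
for q_l in qs:
    for q_h in qs[qs >= q_l]:
        p_l = theta_l * q_l                                # low type's IR binds
        p_h = theta_h * q_h - (theta_h - theta_l) * q_l    # high type's IC binds
        profit = n_l * (p_l - q_l**2 / 2) + n_h * (p_h - q_h**2 / 2)
        if profit > best[1]:
            best = ((q_l, q_h), profit)

(q_l, q_h), _ = best
print(f"first-best qualities: low {theta_l:.1f}, high {theta_h:.1f} (gap 0.5)")
print(f"screening qualities : low {q_l:.1f}, high {q_h:.1f} (gap {q_h - q_l:.1f})")
# Low-end quality is distorted downward (1.0 vs. 1.5), widening the product
# gap -- the differentiation-increasing logic the paper builds on.
```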

254 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated the relationship between product line structure and brand equity and found that the presence of "premium" or high-quality products in a product line enhances brand equity.
Abstract: This paper addresses the question of how the vertical structure of a product line relates to brand equity. Does the presence of "premium" or high-quality products in a product line enhance brand equity? Conversely, does the presence of "economy" or low-quality products in a product line diminish brand equity? Economists and marketing researchers refer to variation in quality levels of products within a category as "vertical" differentiation, whereas variation in the function or "category" of the products is referred to as "horizontal" differentiation. Much of the existing research on the relationship between product line structure and brand equity has focused on the horizontal structure of the product line and has been primarily concerned with brand extensions: what happens when the product line of a brand is extended horizontally into new categories? Researchers have been concerned primarily with how the extension fares, but the effect of the extension on the core products is also important. There is an analogous question of what happens when the product line of a brand is extended vertically, either "up market" or "down market." This question of vertical extensions is part of the more general issue of how the vertical structure of a product line relates to brand equity. The specific research questions addressed in this paper are: (1) do "premium" or high-quality products enhance the brand equity associated with the other products in the line? (2) Conversely, do "economy" or low-quality products diminish the brand equity associated with the other products in the line? These research questions are relevant to three managerial issues in product-line strategy. First, what are the costs and benefits of including "down market" products within a brand? Second, what are the implications of including high-end models within a brand? Third, when should high-end and low-end products be offered under an existing brand umbrella and when should these products be offered under separate brands? We address these research questions empirically through an analysis of the models and brands within the U.S. mountain bicycle industry. We use price premium above that which can be explained by the physical characteristics of the bicycle as a metric for brand equity. We then test several hypotheses related to the relationship between extension of the product line upward and downward and the price premium commanded by the brand. We further support this analysis with a simple laboratory experiment. The analysis reveals that price premium, in the lower quality segments of the market, is significantly positively correlated with the quality of the lowest-quality model in the brand's product line; and, that for the upper quality segments of the market, price premium is also significantly positively correlated with the quality of the highest-quality model in the brand's product line. The results of the analysis are supported by the outcome of an experiment in which 63 percent of the subjects preferred a product offered by a high-end brand to the equivalent product offered by a low-end competitor. These results imply that managers wishing only to maximize the equity of their brands would offer only high-quality products and avoid offering low-quality products. However, this result must be moderated by the overall objective of maximizing profits.
Maximizing profits is likely to involve a tradeoff between preserving high brand equity (and therefore high margins) and pursuing the volume typically located in the lower end of the market. One of the most significant implications of this research is that product line managers need to be mindful not just of the incremental cannibalization or stimulation of sales of products that are immediate neighbors of an extension to the product line, but also of the effect of such an extension on the brand equity in other, possibly quite different, parts of the product line.
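The brand-equity metric used here is essentially a hedonic regression, which is straightforward to sketch. The example below uses fabricated data and hypothetical column names, not the paper's bicycle data set: price is regressed on physical characteristics plus brand dummies, and the brand fixed effects recover the premiums unexplained by attributes.

```python
# Hedonic-regression sketch of the price-premium metric (fabricated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 120
bikes = pd.DataFrame({
    "weight_kg": rng.uniform(9.5, 14.5, n),
    "suspension": rng.integers(0, 2, n),
    "brand": rng.choice(["HighEnd", "MidRange", "Discount"], n),
})
premium = bikes["brand"].map({"HighEnd": 300.0, "MidRange": 100.0, "Discount": 0.0})
bikes["price"] = (2500 - 120 * bikes["weight_kg"] + 400 * bikes["suspension"]
                  + premium + rng.normal(0, 50, n))

fit = smf.ols("price ~ weight_kg + suspension + C(brand, Treatment(reference='Discount'))",
              data=bikes).fit()
print(fit.params.round(1))   # brand coefficients ~= the price premiums above attributes
```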

Journal ArticleDOI
TL;DR: In this paper, the authors analyze a four-stage game with two manufacturers and two retailers, in which intrachannel contracts are linear and observable and manufacturers make investments in process improvements to reduce their production costs, and show how the degree of product substitutability interacts with the ease of cost reduction in shaping channel structure decisions.
Abstract: In this paper we analyze the joint implications of two effects: (a) inserting independent profit-maximizing retailers into the channel system provides "buffering" to the manufacturers from price competition when their products are highly substitutable and intrachannel contracts are observable (as shown by McGuire and Staelin 1983 under the assumption of constant marginal production costs), and, (b) lack of channel coordination results in a reduction in manufacturer's incentives to invest in efforts to reduce production costs (as shown by Jeuland and Shugan 1983 for the case of bilateral monopoly). We show that both these results are robust in the sense that the first holds even in the presence of the vertical externality of manufacturer's effort reduction in a noncoordinated channel, and the second holds regardless of the degree of substitutability between the competing channel's products. Specifically, we analyze a four-stage game with two manufacturers and two retailers, where the intrachannel contracts are linear and observable and manufacturers make investments in process improvements to reduce their production costs. We find that the optimal channel structure decision depends on interactions between two parameters: the degree of substitutability between products and the level of investments required to achieve production cost reduction. These parameters represent what have been widely interpreted in the management literature as the two primary "generic strategies" that most organizations follow in order to gain competitive advantage: cost leadership and product differentiation (Porter 1980). Thus, our analysis brings out the strategic and interdisciplinary nature of the channel structure decision that can significantly affect firm profitability. Our main results are as follows. First, we find that decentralized, noncoordinated channels emerge as a more profitable equilibrium than integration (or perfectly coordinated channels) at high product substitutability even when the process innovation dimension is accounted for, in agreement with the literature. However, the range of substitutability over which decentralization is an equilibrium strategy is smaller the easier it is to reduce production costs. Intuitively, the easier the cost reduction, the larger the cost penalty that the channel incurs as a result of not coordinating investment and pricing decisions between channel members, and thus the smaller the range over which decentralization is an equilibrium. This implies that there is an explicit tradeoff between efficiency and strategic incentives in distribution channel design. Second, we show that decentralized manufacturers invest less in process innovation than integrated manufacturers do, regardless of the structure of the competing channel and the degree of substitutability between products. Consequently, a decentralized channel has higher costs, charges higher prices, and produces lower quantities than an integrated channel does. Moreover, these differences get larger the easier the cost reduction. The effect on manufacturer profits, however, is not that clear. Manufacturers make higher profits by decentralizing if products are highly substitutable, in agreement with McGuire and Staelin (1983) and Coughlan and Wernerfelt (1989). However, we also find that the relative profitability of decentralization at high substitutability (and of integration at low substitutability) increases the easier the cost reduction.
Moreover, the range of substitutability over which decentralization is more profitable than integration is itself larger the easier the cost reduction (though decentralization is an equilibrium strategy over a smaller range). Thus, process innovation accentuates the profit difference between integrated and decentralized channels and makes the Prisoner's Dilemma situation worse in the choice of distribution channel structure. Finally, we analyze two examples of coordinated decision making in a channel: a divisional integrated system and franchising. In the first case, we find that decentralization can emerge as a unique (and more profitable) equilibrium at high product substitutability, in contrast to McGuire and Staelin (1983). In the second case, we find that decentralization is not always a unique equilibrium and it is not always more profitable than integration, in sharp contrast to the results by Coughlan and Wernerfelt (1989). Thus, franchising does not provide a sure way of achieving channel coordination when marginal production costs are not constant. In sum, this paper highlights the importance of simultaneously considering both the horizontal and the vertical dimensions of interorganizational relations on one hand and, on the other, paying attention to cross-functional interactions across marketing and operational decisions to better understand the underlying incentives that shape firm and market structures; the conventional focus of marketing on demand-side effects and of operations on cost-side effects can lead to suboptimal decisions.

Journal ArticleDOI
TL;DR: In this article, a hierarchical Bayes continuous random effects model that integrates consumer choice and quantity decisions such that individual-level parameters can be estimated is presented. But, the model is not suitable for the analysis of consumer preferences and consumption.
Abstract: Product design, pricing policies, and promotional activities influence the primary and secondary demand for goods and services. Brand managers need to develop an understanding of the relationships between marketing mix decisions and consumer decisions of whether to purchase in the product category, which brand to buy, and how much to consume. Knowing which factors are most effective in influencing the primary and secondary demand for a product allows firms to grow by enhancing their market share as well as their market size. The purpose of this paper is to develop an individual-level model that allows an investigation of both the primary and secondary aspects of consumer demand. Unlike models of only primary demand or only secondary demand, this more comprehensive model offers the opportunity to identify changes in product features that will result in the greatest increase in demand. It also offers the opportunity to differentially target consumer segments depending upon whether consumers are most likely to enter the market, increase their consumption level, or switch brands. In the proposed hierarchical Bayes model, an integrative framework that jointly models the discrete choice and continuous quantity components of the consumer decision is employed instead of treating the two as independent. The model includes parameters that capture individual-specific reservation value, attribute preference, and expenditure sensitivity. The model development is based upon the microeconomic theory of utility maximization. Heterogeneity in model parameters across the sample is captured by using a random effects specification guided by the underlying microeconomic model. This requires that some of the effects be strictly positive, which is accommodated through the use of a gamma distribution of heterogeneity for those parameters; a normal distribution of heterogeneity is used for the remaining parameters. Gibbs sampling is used to estimate the model. The key methodological contribution of this paper is that we show how to specify a hierarchical Bayes continuous random effects model that integrates consumer choice and quantity decisions such that individual-level parameters can be estimated. Individual-level estimates are desirable because insights into primary demand involve nonlinear functions of model parameters. For example, consumers not in the market are those whose utilities for the choice alternatives fall below some reservation value. The proposed methodology yields individual-specific estimates of reservation values and expenditure sensitivity, which allow assessment of the origins of demand other than the switching behavior of consumers. The methodology can also be used to help identify changes in product features most likely to bring new customers into a market. Our work differs from previous research in this area as we lay the framework needed to obtain individual-level parameter estimates in a continuous random effects model that integrates choice and quantity. The methodology is demonstrated with survey data collected about consumer preferences and consumption for a food item. For the data available, large response heterogeneity was observed across all model parameters. In spite of the limited data available at the individual level, a majority of the individual-level estimates were found to be significant. Predictive tests demonstrated the superiority of the proposed model over existing latent class and aggregate models.
In particular, significant gains in predictive accuracy were observed for the "no-buy" behavior of the respondents. These gains demonstrate that structurally linking the choice and quantity models results in a more accurate characterization of the market than existing finite mixture approaches that model choice and quantity independently. We show that our joint model makes more efficient use of the available data and results in better parameter estimates than those that assume independence. Finally, the individual-level demand analysis is illustrated through a simple example involving a $1.00 price cut. We demonstrate the practical usefulness of the model for targeting by developing the demographic, attitudinal, and behavioral profiles of the consumer groups most likely to increase consumption, enter the market, or switch brands in response to the price cut.
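As a rough illustration of the no-buy/brand/quantity cascade described above, the following sketch simulates one consumer under assumed functional forms and parameter values (the quantity rule in particular is hypothetical); the gamma draws mirror the strictly positive random effects in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# One simulated consumer h; gamma draws keep the effects strictly positive.
beta_h  = rng.normal([1.0, -2.0], 0.3)    # attribute and price part-worths
r_h     = rng.gamma(2.0, 0.5)             # reservation utility (> 0)
gamma_h = rng.gamma(2.0, 0.25)            # expenditure sensitivity (> 0)

X = np.array([[1.0, 0.99],                # brand A: feature level, price
              [0.8, 0.79]])               # brand B
u = X @ beta_h + rng.gumbel(size=2)       # random utilities

j = int(np.argmax(u))
if u[j] < r_h:
    print("no-buy: the best utility falls below the reservation value")
else:
    # Hypothetical quantity rule: surplus scaled by expenditure sensitivity.
    qty = 1 + int((u[j] - r_h) / gamma_h)
    print(f"buy brand {j}, quantity {qty}")
```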

Journal ArticleDOI
TL;DR: In this article, the authors use the hazard function approach to predict the probability of a line extension in the perceptual space of a household, based on the degree of inertia and variety seeking.
Abstract: Previous research on state dependence indicates that a brand's purchase probabilities vary over time and depend on the levels of inertia and variety seeking and on the identity of the previously purchased brand. Brand-choice probabilities obtained from models such as the logit and the probit are, however, fixed over time, conditional on the previous brand purchased and on the levels of marketing variables. Consequently, state dependence has largely been studied as a time-invariant phenomenon in brand-choice models, with the levels of inertia and variety seeking assumed to be constant over time. To account for the time-varying nature of state dependence would require a model in which brand-switching probabilities depend upon interpurchase times. One modeling framework that can account for this dependence is based on the hazard function approach. The proposed approach works as follows. All other factors being equal, an inertial household purchasing a brand on a particular occasion is most likely to repurchase that brand on the next occasion. If the household switches, it will be to a brand located perceptually close, in attribute space, to the previously purchased brand. In other words, an inertial household has the highest switching hazard for the same origin and destination brands, with a progressively lower hazard rate for brands perceptually located farther and farther away from the origin brand. The amount by which the hazard is lowered depends upon the perceptual distance and the inertia level of the household. On the other hand, if the household is variety seeking, the most likely brand purchased would be a brand located farthest away from the previously purchased brand in attribute space. In other words, the hazard rate of repurchase is the lowest, with the rate increasing with the distance of the destination brand from the origin brand and the level of that household's variety-seeking tendency. The effects of inertia and variety seeking are, therefore, incorporated at the attribute level into a brand-purchase timing model. In doing so, we attempt to provide greater insight into the nature of state dependence in models of purchase timing. Our model and estimation procedure will enable us to distinguish between households that are inertial and those that are variety prone. In addition to accounting for state dependence, the model also accounts for the effects of unobserved heterogeneity among households in their brand preferences and in their sensitivities to marketing activities. A majority of studies in marketing using the hazard function approach to investigate purchase timing have not accounted for heterogeneity in marketing-mix effects. The study integrates recent methods that incorporate the effects of inertia and variety seeking in brand-choice models with a semi-Markov model of purchase timing and brand switching. The proposed model enables us to (1) infer market structure via a perceptual map for the sample households, and (2) investigate implications for the introduction of a line extension. We provide empirical applications of the proposed method using three different household-level scanner panel data sets. We find that differing levels of inertia and variety seeking characterize the three data sets. The findings are consistent with prior beliefs regarding these categories. In addition, our results indicate that the nature of interbrand purchase timing behavior depends upon the extent of inertia or variety seeking in the data.
We are also able to characterize the structure of the three product markets studied. This provides implications for interbrand rivalry in the market. Further, we demonstrate how the model and results can be used to predict the location of a line extension in the perceptual space of households. Finally, we obtain implications for the timing of brand promotions.
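One compact way to write such a switching hazard, in our own notation rather than the paper's exact specification, is:

```latex
% Household h's hazard of switching from origin brand j to destination
% brand k at time t since the last purchase:
\[
  h^{(h)}_{jk}(t) \;=\; h_0(t)\,
    \exp\!\bigl(\beta^{\top} z_k(t) \;-\; \gamma_h\, d_{jk}\bigr),
\]
% where z_k(t) holds marketing covariates, d_{jk} is the perceptual
% distance between brands j and k, and gamma_h captures state dependence:
% gamma_h > 0 for an inertial household (the hazard falls with distance
% from the origin brand), gamma_h < 0 for a variety seeker (the hazard
% rises with distance).
```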

Journal ArticleDOI
TL;DR: In recent years, the supermarket industry has become increasingly competitive. One outcome has been the proliferation of a variety of pricing formats, and considerable debate among academics and pr... as mentioned in this paper.
Abstract: In recent years, the supermarket industry has become increasingly competitive. One outcome has been the proliferation of a variety of pricing formats, and considerable debate among academics and pr...

Journal ArticleDOI
TL;DR: In this paper, the authors analyze the role of both monitoring and incentives in the design of sales force control systems and find that incentive-laden compensation plans are generally more appropriate for individuals who are risk-tolerant and entrepreneurial in nature.
Abstract: Our primary objective in this paper is to analyze a framework that simultaneously examines the role of both monitoring and incentives in the design of sales force control systems. Previous research has focused exclusively on the role of incentives in directing sales force effort. We build on the structure provided by the past work and analyze an agency-theoretic model in which a salesperson generates wealth for the firm by expending effort across two dimensions, namely, internal and external. We assume that effort in the internal dimension can be monitored relatively cheaply, whereas effort in the external dimension can be monitored only at infinite cost. We then analyze the following two scenarios: (i) a pure incentives world wherein both effort dimensions are governed through the use of incentive pay, and (ii) a monitoring and incentives world wherein the internal dimension is monitored and the external dimension is governed through the use of incentive pay. In addition to modeling the notion of partial monitoring in this manner, we also explicitly allow the firm to choose the level of risk aversion desired in its salesperson. Of course, salespeople who are relatively risk-tolerant command higher reservation wages; consequently, such salespeople are likely to be valuable only to those firms that emphasize incentive pay in their control systems. Our analysis across the two scenarios helps us to demonstrate the implications and value of introducing monitoring into the control structure. Specifically, we find that monitoring allows the firm to decrease the weight placed on incentives and hire a relatively risk-averse salesperson from the salesforce labor market. These actions, in turn, permit the firm to reduce the risk premium and the reservation wage offered to the salesperson. In direct contrast to these monetary savings, however, we find that an adverse side effect of monitoring is that it induces salespeople to overemphasize the effort devoted to the monitored dimension while underemphasizing the effort devoted to the nonmonitored dimension. This adverse effect of monitoring notwithstanding, we find that the overall benefit of increased monitoring is that it allows the firm to lower the amount of total compensation paid to the salesperson. These analytical findings are consistent with the prescriptions found in the popular business press, where it is often stated that compensation plans that emphasize incentive pay are characterized by independence in managing activities (lack of monitoring) as well as high income potential. These findings are also consistent with the popular wisdom that incentive-laden compensation plans are generally more appropriate for individuals who are risk takers and entrepreneurial in nature. We also delineate the conditions under which monitoring can improve on the profits obtained in a pure incentives world. Specifically, we find that monitoring can prove to be most valuable when the importance of internal activities is high and the level of incentives is low. Finally, we conclude by conducting a sensitivity analysis to examine the robustness of our results to the specifications we utilize in our modeling efforts. Overall, we view the main contribution of our research as explicitly delineating the tradeoffs associated with the use of monitoring and incentives in the design of salesforce control systems. As such, our paper should be of interest to academics and practitioners interested in the design of salesforce control systems.
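A generic LEN-style (linear contract, exponential utility, normal noise) certainty equivalent, not necessarily the paper's exact model, captures the tradeoff the abstract describes:

```latex
\[
  \mathrm{CE} \;=\; \alpha \;+\; \beta\, s(e_{\mathrm{int}}, e_{\mathrm{ext}})
    \;-\; \tfrac{\rho}{2}\,\beta^{2}\sigma^{2}
    \;-\; C(e_{\mathrm{int}}, e_{\mathrm{ext}}),
\]
% where alpha is salary, beta the incentive weight on sales s, rho the
% salesperson's risk aversion, and sigma^2 the noise in measured sales.
% Monitoring the internal dimension lets the firm contract on e_int
% directly, reduce beta (and hence the risk premium (rho/2) beta^2 sigma^2),
% and hire a more risk-averse, cheaper agent, at the cost of effort
% tilting toward the monitored dimension.
```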

Journal ArticleDOI
TL;DR: In this paper, the authors present a conceptual framework to describe the commercial zapping phenomenon and use it to identify factors that influence channel switching during commercials, and explore the impact of advertising content on zapping and find that the presence of a brand differentiating message in a commercial causes a statistically significant decrease in zapping probabilities.
Abstract: We present a conceptual framework to describe the commercial zapping phenomenon and use it to identify factors that influence channel switching during commercials. Drawing on previous research, published reports of practitioner gut feel, interventions used by advertisers to reduce channel switching, and proprietary studies reported in the published literature, we describe how these variables might potentially affect the decision to zap a commercial. We use a latent class approach to model the impact of the identified factors on two aspects of the switching decision: whether or not a commercial is zapped (modeled with a binary logit model) and, conditional on a zap having taken place, the number of seconds that the commercial was watched before being zapped (modeled within the proportional hazards framework). The model is estimated on telemeter data on commercial viewing in two categories (spaghetti sauce and window cleaners) obtained from a single-source, household scanner panel. The results from the empirical analysis show that households can be grouped into two segments. The first, which consists of about 35% of households in the sample, is more zap-prone than the second. For this "zapping segment," the probability of zapping a commercial is lower for households who make more purchases in the product category. Also, zapping shows a J-shaped response to previous exposures to the commercial, with the associated zapping elasticity reaching its minimum value at around 14 exposures and increasing rapidly thereafter. This finding suggests that advertisers should be cautious not to use media schedules that have excessive media weight or that emphasize frequency over reach. We found zapping probabilities for ads aired around the hour and half-hour marks to be significantly higher than for other pod locations. Based on these results, we argue that prices for advertising pods located around the hour/half-hour marks should be between 5% and 33% lower than those in the remaining portion of the program. We explore the impact of advertising content on zapping and find that the presence of a brand-differentiating message in a commercial causes a statistically significant decrease in zapping probabilities. While the magnitude of this effect is small, the finding suggests that it may be helpful to include qualitative variables in future models of advertising response. We propose the expected proportion of time that an ad is watched as a benchmark to compare 15-second and 30-second ad formats from a zapping standpoint. We found no significant differences between the two formats on this dimension. Our analysis also shows that, due to the impact of previous exposures on zapping, the use of 15-second ads runs a greater risk of reaching the threshold exposure level beyond which zapping probabilities start to increase. This implies that while managers must be cognizant of the risks of overexposure for any ad, it is especially important in the case of the shorter, 15-second ad format.
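A minimal sketch of this two-part structure, under assumed functional forms (an exponential time-to-zap hazard, and mixing at the exposure level for brevity, whereas the paper segments households), might look like:

```python
import numpy as np
from scipy.special import expit, logsumexp

def zap_loglik(pi, B, G, X, zapped, secs):
    """pi: (S,) segment weights; B, G: (S, p) logit / log-hazard
    coefficients; X: (n, p) exposure covariates; zapped: (n,) 0/1;
    secs: (n,) seconds watched before a zap (ignored when not zapped)."""
    per_seg = []
    for s in range(len(pi)):
        p_zap = expit(X @ B[s])            # P(commercial is zapped)
        lam = np.exp(X @ G[s])             # exponential hazard rate
        ll = np.where(zapped == 1,
                      np.log(p_zap) + np.log(lam) - lam * secs,
                      np.log1p(-p_zap))
        per_seg.append(np.log(pi[s]) + ll)
    # Mixture log-likelihood: logsumexp over segments, summed over exposures.
    return logsumexp(np.vstack(per_seg), axis=0).sum()
```

Maximizing this over pi, B, and G with a generic optimizer (or EM) recovers the segment structure; covariates such as category purchases, prior exposures, and pod location would enter through X.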

Journal ArticleDOI
TL;DR: In this article, the authors investigate how much information sale signs reveal and demonstrate that sale signs are self-regulating in a game-theoretic model, in which competing stores sell imperfect substitutes in two-period overlapping seasons and new customers arrive each period and decide whether to purchase immediately or delay and return in the future.
Abstract: Sale signs increase demand. The apparent effectiveness of this simple strategy is surprising; sale signs are inexpensive to produce and stores generally make no commitment when using them. As a result, they can be placed on any products, and as many products, as stores prefer. If stores can place sale signs on any or all of their products, why are they effective? We offer an explanation for the effectiveness of sale signs by arguing that they inform customers about which products have relatively low prices, thus helping customers decide whether to purchase now, visit another store, or perhaps return to the same store in the future. This explanation raises two additional issues. First, why do stores prefer to place sale signs on products that are truly low priced (stores could use sale signs to increase demand for any of their products)? Second, how many sale signs should a store use; should they limit sale signs to just their relatively low priced products or should they also place them on some of their higher priced products? The paper addresses each of these questions and in doing so investigates how much information sale signs reveal. Our arguments are illustrated using a formal game-theoretic model in which competing stores sell imperfect substitutes in two-period overlapping seasons. Stores choose price and sale sign strategies, and new customers arrive each period and decide whether to purchase immediately or delay and return in the future (to the same store or a different store). Customers who delay purchasing risk that the product will not be available in the following period and incur an additional transportation cost when they return. Two factors balance these costs. First, customers correctly anticipate that the price will be lower if the product is available in the next period. Second, customers who return to a different store may find a product that better suits their needs. In deciding how to respond, customers use price and sale sign cues to update their expectations about which products will be available in the next period. Stores' sale sign and price strategies are entirely endogenous in the model, as is the impact of sale signs on demand. In our discussion we highlight the information revealed by sale signs, including the source of its credibility, its sensitivity to the number of sale signs that are used, and the resulting shift in customer demand. We point to two key results. First, we show that the underlying signal is self-fulfilling: if customers believe that products with sale signs are more likely to be relatively low priced, then firms prefer to place sale signs on lower priced products. Second, we demonstrate that sale signs are self-regulating. Stores may introduce noise by placing sale signs on some more expensive products. However, if customers' price expectations are sensitive to the number of products that have sale signs, this strategy is not without cost. Using additional sale signs may reduce demand for other products that already have sale signs. Our model is unique in several respects. First, we describe how stores use multiple signals to communicate with customers and recognize that customers vary in how much they learn from each signal. Price alone resolves uncertainty for some customers, but other customers use both prices and sale signs to resolve their uncertainty.
Second, although previous signaling models have recognized that signals may be noisy (not always accurate), noise in these signals is typically exogenous, resulting from uncontrolled environmental distortions. In our model, stores endogenously choose to introduce noise so that sale signs only partially reveal which products are discounted. Our explanations are supported by several examples. Although we focus on fashion products, our findings have application to any market in which customers are uncertain about relative price levels.
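The self-regulating logic has a simple arithmetic core. In the toy calculation below (ours, not the paper's equilibrium), a store signs all of its genuinely discounted products plus some full-priced ones, and a customer who inspects a random signed item treats the sign as credible only in proportion to the share of signs that are genuine:

```python
def p_discounted_given_sign(n_discounted, n_signed):
    # All discounted items carry signs; any extra signs are "noise"
    # placed on full-priced items. A randomly inspected signed item is
    # therefore genuinely discounted with probability L / S.
    assert n_signed >= n_discounted
    return n_discounted / n_signed

for n_signed in (4, 6, 10):
    print(n_signed, round(p_discounted_given_sign(4, n_signed), 2))
# 4 signs -> 1.0, 6 signs -> 0.67, 10 signs -> 0.4: each extra sign
# dilutes the information carried by every other sign.
```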

Journal ArticleDOI
TL;DR: In this paper, the authors develop a Multinomial Probit (MNP) model for conjoint choice experiments, compare random coefficients (RC) versions with the Independent Probit (IP) benchmark, and find that the RC model accommodates correlations among choice sets caused by background contrast effects, whereas the model that treats choice sets as independent (iRC) accounts for local contrast effects only.
Abstract: Experimental conjoint choice analysis is among the most frequently used methods for measuring and analyzing consumer preferences. The data from such experiments have typically been analyzed with the Multinomial Logit (MNL) model. However, there are several problems associated with the standard MNL model because it is based on the assumption that the error terms of the underlying random utilities are independent across alternatives, choice sets, and subjects. The Multinomial Probit (MNP) model is well known to alleviate this assumption of independence of the error terms. Accounting for covariances in utilities when modeling choice experiments with the MNP is important because variation of the coefficients in the choice model may occur due to context effects. Previous research has shown that subjects' utilities for alternatives depend on the choice context, that is, the particular set of alternatives evaluated. Simonson and Tversky's tradeoff contrast principle describes the effect of the choice context on attribute importance and patterns of choice. They distinguish local contrast effects, which are caused by the alternatives in the offered set only, and background contrast effects, which are due to the influence of alternatives previously considered in choice experiments. These effects are hypothesized to cause correlations in the utilities of alternatives within and across choice sets, respectively. The purpose of this study is to develop an MNP model for conjoint choice experiments. This model is important for a more detailed study of choice patterns in those experiments. In developing the MNP model for conjoint choice experiments, several hurdles must be overcome, related to the identification of the model and to the prediction of holdout profiles. To overcome those problems, we propose a random coefficients (RC) model that assumes a multivariate normal distribution of the regression coefficients with a rank-one factor structure on the covariance matrix of these regression coefficients. The parameters in this covariance matrix can be used to identify which attributes and levels of attributes are potential sources of dependencies between the alternatives and choice sets in a conjoint choice experiment. We present several versions of this model. Moreover, for each of these models we allow utilities to be either correlated or independent across choice sets. The Independent Probit (IP) model is used as a benchmark. Given the dimensionality of the integrations involved in computing the choice probabilities, the models are estimated with simulated likelihood, where simulations are used to approximate the integrals involved in the choice probabilities. We apply and compare the models in two conjoint choice experiments. In both applications, the random coefficients MNP model that allows choices in different choice sets to be correlated (RC) displays superior fit and predictive validity compared with all other models. We hypothesize that the difference in fit occurs because the RC model accommodates correlations among choice sets that are caused by background contrast effects, whereas the model that treats choice sets as independent (iRC) accounts for local contrast effects only. The iRC model shows superior model fit compared with the IP model, but its predictions are worse than those of the IP model.
We find differences in the importance of local and background contrast effects for choice sets containing different numbers of alternatives: The background contrast effect may be stronger for smaller choice sets, whereas the local contrast effect may be stronger for bigger choice sets. We illustrate the differences in simulated market shares that are obtained from the RC, iRC, and IP models in three hypothetical situations: product modification, product line extension, and the introduction of a me-too brand. In all of those situations, substantially different market shares are predicted by the three models, which illustrates the extent to which erroneous predictions may be obtained from the misspecified iRC and IP models.
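A crude frequency simulator illustrates how choice probabilities for such a random coefficients MNP can be simulated; the rank-one factor structure enters through a single common factor, and the names and parameter values here are ours (the paper uses a smoother simulated-likelihood estimator):

```python
import numpy as np

def rc_mnp_probs(X, mu, sigma, n_draws=5000, seed=1):
    """X: (J, p) attribute matrix of one choice set; mu: (p,) mean
    part-worths; sigma: (p,) loading vector, so Cov(beta) = sigma sigma^T
    (rank one)."""
    rng = np.random.default_rng(seed)
    f = rng.normal(size=(n_draws, 1))               # common factor draws
    beta = mu + f * sigma                           # random coefficients
    u = beta @ X.T + rng.normal(size=(n_draws, X.shape[0]))  # add iid errors
    # Frequency simulator: share of draws in which each profile wins.
    return np.bincount(u.argmax(axis=1), minlength=X.shape[0]) / n_draws

# Hypothetical 3-alternative choice set with two attributes per profile.
X = np.array([[1.0, 0.5], [0.6, 1.0], [0.2, 0.2]])
print(rc_mnp_probs(X, mu=np.array([1.0, 0.8]), sigma=np.array([0.5, -0.3])))
```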

Journal ArticleDOI
TL;DR: In this article, the authors proposed a new approach, COSTA, an acronym for "contribution optimizing sales territory alignment," which operates with sales response functions of any given concave form at the level of sales coverage units (SCUs) that cover a group of geographically demarcated individual accounts.
Abstract: The alignment of sales territories has a considerable impact on profit and represents a major problem in salesforce management. Practitioners usually apply the balancing approach, which balances territories as well as possible with respect to one or more attributes such as potential or workload. Unfortunately, this approach does not necessarily maximize profit contribution, nor does it provide an evaluation of the profit implications of an alignment proposal in comparison with the existing one. In consequence, several authors proposed nonlinear integer optimization models in the 1970s. These models attempted to maximize profit directly by simultaneously considering the problems of allocating selling time (calling plus travel time) across accounts and of assigning accounts to territories. However, these models turned out to be too complex to be solvable. Therefore, the authors either approximated the problem or proposed heuristic solution procedures based on the suboptimal principle of equating the marginal profit of selling time across territories. To overcome these limitations, we propose a new approach, COSTA, an acronym for "contribution optimizing sales territory alignment." In contrast to previously suggested profit-maximizing approaches, COSTA operates with sales response functions of any given concave form at the level of sales coverage units (SCUs), each of which covers a group of geographically demarcated individual accounts. Thus, COSTA works with sales response functions at a more aggregated level that requires less data than other profit maximization approaches. COSTA models sales as a function of selling time, which includes calling time as well as travel time, assuming a constant ratio of travel to calling time. In addition, the formulation of the model shows that an optimal solution requires only equal marginal profits of selling time across the sales coverage units within a territory, not across SCUs of different territories. Basically, COSTA consists of an allocation model and an assignment model, both of which are considered simultaneously. The allocation model optimally allocates the available selling time of a salesperson across the SCUs of his or her territory, whereas the assignment model assigns the SCUs to territories. Thus, COSTA predicts the corresponding profit contribution of every possible alignment solution, which enables one to perform "what-if" analyses. The applicability of the model is supported by the development of a powerful heuristic solution procedure. A simulation study showed that COSTA provided solutions that are on average as close as 0.195% to an upper bound on the optimal solution. The proposed heuristic solution procedure makes it possible to solve large territory alignment problems, because computing time increases only quadratically with the number of SCUs and proportionally to the square root of the number of salespersons. In principle, we also show how COSTA might be expanded to solve the salesforce sizing and salesperson location problems. The usefulness of COSTA is illustrated by an application. The results of this application indicated substantial profit improvements and also outlined the weaknesses of almost-balanced territories. It is quite apparent that balancing is possible only at the expense of profit improvements and also does not lead to equal income opportunities for the salespersons.
This aspect should be dealt with separately from territory considerations by using territory-specific quotas and linking variable payment to the achievement of these quotas. Furthermore, the superiority of COSTA turned out to be stable in a simulation study on the effect of misspecified sales response functions. COSTA is of interest to researchers as well as practitioners in the salesforce area. It aims to revive the stream of research in the 1970s that already proposed sales territory alignment models aimed at maximizing profit. Such profit maximizing models are theoretically more appealing than approaches that strive to balance one or several attributes, such as potential or workload. COSTA's main advantage over previous profit maximizing approaches is that it is less complex. Consequently, COSTA demands less data so that even large problems can be solved close to optimality within reasonable computing times.
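COSTA's within-territory optimality condition (equal marginal profits of selling time across a territory's SCUs) has a closed form for simple concave responses. The sketch below assumes power-function SCU responses a_u * t_u**b with a common exponent b and a constant margin, a simplification of ours; COSTA itself accepts any concave form:

```python
import numpy as np

def allocate_time(a, b, T):
    """Equal-marginal split of selling time T across SCUs when sales in
    SCU u are a_u * t_u**b (0 < b < 1). With a constant margin, equating
    marginal profits reduces to equating marginal sales a_u*b*t_u**(b-1),
    which gives t_u proportional to a_u**(1/(1-b))."""
    w = a ** (1.0 / (1.0 - b))
    return T * w / w.sum()

a = np.array([4.0, 2.0, 1.0])          # hypothetical SCU response scales
t = allocate_time(a, b=0.5, T=100.0)
print(t)                               # ~[76.19, 19.05, 4.76] hours
print(a * 0.5 * t ** -0.5)             # marginal sales equal (~0.229 each)
```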

Journal ArticleDOI
TL;DR: In this paper, the authors propose a conceptual framework for understanding differences in the magnitude and timing of incumbents' responses to competitive entries, and show that an incumbent's reaction may cause the consumers to make inferences about the entrant's quality.
Abstract: Empirical studies examining responses to new product entries come to the puzzling conclusion that, in general, an incumbent reacts to a new entrant only after a significant delay. Even easy-to-implement price cuts are observed only after a significant lag following entry. These findings seem to contradict the existing literature, which either implicitly assumes or strongly advocates immediate defensive responses to limit competitive encroachment. When a competing firm enters the market, consumers may be uncertain about the entering firm's product quality. The incumbent firm, through rigorous tests, may fully know the entrant's quality. Suppose the incumbent aggressively lowers price. This may cause consumers to wonder whether the entrant's quality is in fact high. In other words, an incumbent's reaction may cause consumers to make inferences about the entrant's quality. Such strategic implications of the incumbent's reactions have to be carefully analyzed before determining the incumbent's optimal response. In this paper, we propose a conceptual framework for understanding differences in the magnitude and timing of incumbents' responses to competitive entries. We consider a model in which a monopolist incumbent firm faces competitive entry. The incumbent firm knows the true quality of the entrant with certainty. Although consumers are aware of the incumbent's product quality through their prior experience, they are initially uncertain of the entrant's product quality. In such a situation, a high-quality entrant has the incentive to signal her true quality through her strategic price choice. However, the uncertainty about the entrant's quality is favorable to the incumbent in the sense that consumers believe with high probability that the entrant's quality is low. As a result, the strategic incentives facing the incumbent and the entrant oppose each other. While the entrant wants to signal her high quality, the incumbent wants to prevent her from doing so. We demonstrate that one way the incumbent can prevent this quality signaling is to select a price higher than his optimal competitive duopoly price. In other words, the incumbent can prevent or "jam" the entrant's quality signaling by choosing a price higher than his optimal competitive price when consumers are fully informed about the entrant's true quality. Though the signal-jamming price is lower than the monopoly price, it is substantially higher than the competitive price. This marginal reduction in the incumbent's price from the pre-entry monopolistic price represents a muted response, or lack of response, by the incumbent to the competitive entry. However, once the entrant's quality is revealed in subsequent periods through consumer usage and word of mouth, the entrant has no incentive to engage in quality signaling and the incumbent has no incentive to jam it. Therefore, the market reverts to the complete-information competitive prices, and the incumbent lowers his price considerably. This temporal pattern of muted price reduction in the first period followed by a sharp price reduction in the second period corresponds to a delayed defensive reaction in our model. Although the empirical studies suggest that the delayed reaction may arise from factors such as managerial inertia or indecision, we demonstrate that such behavior is indeed an optimal strategy for a profit-maximizing firm. Thus, our model reconciles the empirical results with the equilibrium outcome of a strategic analytical framework.
Furthermore, in an experimental setting, we test the predictive power of our framework and establish that consumers indeed form conjectures about the entrant's quality based on the incumbent's reactions. In the first experimental study, we find strong support for the notion that the incumbent's price reaction may indicate the entrant's quality. In a follow-up study, we observe that whenever the incumbent lowers prices, respondents judge the quality of the entrant to be higher than when prices stay the same or increase. The managerial implication of this paper is that well-established incumbent firms should be cautious in implementing their defensive responses to product introductions of uncertain quality by competitors. Of particular concern are situations where the reactions are easily observable by consumers. A strong reaction may suggest that the incumbent takes the competitive threat seriously, leading consumers to believe in the quality of the competitor's product.
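The consumer inference tested in these experiments is, at its core, Bayes' rule. A toy calculation with assumed reaction frequencies (purely illustrative, not the paper's estimates):

```python
# If incumbents cut price more often when facing a high-quality entrant,
# observing a deep cut raises the posterior that the entrant is high
# quality, exactly the belief a muted first-period response avoids feeding.
prior_high = 0.3
p_cut_if_high, p_cut_if_low = 0.8, 0.2     # assumed reaction frequencies
post_high = (p_cut_if_high * prior_high) / (
    p_cut_if_high * prior_high + p_cut_if_low * (1 - prior_high))
print(round(post_high, 3))                  # 0.632, up from a prior of 0.3
```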

Journal ArticleDOI
TL;DR: The authors further explored the impact of the two components of bait and switch: out of stock and upselling, and concluded that deceptive bait-and-switch practices result in harm to consumers and should not be legalized.
Abstract: While the field of marketing science has long been interested in the effects of promotional efforts, public policy issues involving illegal marketer fraud and deception have generally not been addressed in this body of work. One key exception to this generalization is a Marketing Science article suggesting that the practice of "bait and switch" may be beneficial to consumers and, furthermore, that the Federal Trade Commission should investigate revising its standards to legitimize this practice Gerstner and Hess 1990. This finding and recommendation seemed so significant that it is surprising that the recommendation has not, to date, been explored in greater detail. In this paper we further explore the impact of the two components of bait and switch: out of stock and upselling. We do this by using Moorthy's 1993 theoretical modeling framework to systematically extend and assess the Gerstner and Hess model. We find that the previously reported increase in consumer welfare that arises from allowing out-of-stock conditions at retailers is actually due to the utility created by salespersons' explaining product features and benefits, not by the out of stock. Thus, the ramifications of both our legal and modeling analyses are that deceptive bait-and-switch practices result in harm to consumers and should not be legalized. Our paper concludes by proposing worthwhile modeling issues for further exploration. In addition, we suggest that our procedure for analyzing public policy issues by exploring the confluence of law, consumer behavior, and marketing models can serve as a useful exemplar for further contributions to public policy by marketing scientists.

Journal ArticleDOI
TL;DR: This article shows, in a more general setting, that a law prohibiting bait and switch in a competitive market can reduce consumer well-being but never improve it: when bait and switch occurs it creates welfare gains, and when it would create welfare losses it does not occur, regardless of a law prohibiting the practice.
Abstract: In our 1990 Marketing Science paper we demonstrated that a law prohibiting bait and switch may have the surprising consequence of hurting the consumers it was designed to protect. Wilkie, Mela, and Gundlach (1998) postulate that this may be false if upselling is equally effective when the bait brand is available and when it is out of stock. We show here that our earlier conclusion is correct in a more general setting: A law prohibiting bait and switch in a competitive market can reduce consumer well-being but never improve it. When bait and switch occurs, it creates welfare gains, and when it would create welfare losses, it does not occur, regardless of a law prohibiting the practice.

Journal ArticleDOI
Richard Staelin1
TL;DR: The editor-in-chief of Marketing Science reflects on the complexity of the editorial and reviewing process.
Abstract: Reflections of the editor-in-chief of Marketing Science on the complexity of the editorial and reviewing process.

Journal ArticleDOI
TL;DR: In this article, the authors pointed out that the benefits of bait-and-switching are predicated on a single component (availability) within the broader domain of bait and switch, and the assumption that one of the parameters in the consumer utility function differs with the availability of advertised brands, and a further assumption that no other parameter in the model will change when the availability condition changes.
Abstract: We applaud the advances in this colloquy and the areas of convergence that are emerging. However, this reply points out that the purported benefits of "bait and switch" found in Hess and Gerstner (1998) are predicated upon (i) only a single component (availability) within the broader domain of bait and switch; (ii) the assumption that one of the parameters in the consumer utility function differs with the availability of advertised brands; and (iii) a further assumption that no other parameters in the model will change when the availability condition changes. After assessing these developments, we conclude that (i) the legal status of bait-and-switch schemes is fine as it stands; (ii) when understood in their true complexity, parameters in the consumer utility functions likely will not differ with regard to availability, thus obviating the finding of increased consumer welfare; and (iii) even if it is believed that utility functions would differ, effects on other model parameters clearly suggest that consumers will be worse off with bait and switch. Despite these differences, however, we are pleased with the developments the dialogue has produced.

Journal Article
TL;DR: An erratum noting that, due to a printer's error, the acknowledgment footnote was deleted from the final page proofs of "Bias and Systematic Change in the Parameter Estimates of Macro-Level Diffusion Models," by Christoph Van den Bulte and Gary L. Lilien, Vol. 16, No. 4, 1997, pp. 338–353.
Abstract: Due to a printer's error, the acknowledgment footnote (footnote 4, p. 350) was deleted from the final page proofs of "Bias and Systematic Change in the Parameter Estimates of Macro-Level Diffusion Models," by Christoph Van den Bulte and Gary L. Lilien, Vol. 16, No. 4, 1997, pp. 338–353.

Journal ArticleDOI
TL;DR: In this article, the authors describe how consumers now purchase several offerings from direct sellers, including catalog and Internet marketers, through channels that exist in parallel with conventional retail stores.
Abstract: Consumers now purchase several offerings from direct sellers, including catalog and Internet marketers. These direct channels exist in parallel with the conventional retail stores. The availability...

Journal ArticleDOI
Richard Staelin1
TL;DR: One of the major responsibilities of the Editor and the Area Editor is to decide on the appropriateness of a submitted paper as discussed by the authors, and four major criteria are used in making this determination.
Abstract: One of the major responsibilities of the Editor and the Area Editor is to decide on the appropriateness of a submitted paper. As stated in prior editorials (Staelin 1996, 1997), four major criteria are used in making this determination: (1) Will the paper be of interest to our readers? (2) Is it readable? (3) Is it not wrong? (4) Does it make a contribution to the field? Assuming the determination is positive, it is the job of the Editor, with the help of the review team, to specify a course of action that he believes will lead to a publishable paper. What, then, leads the review team to encourage the publishing of three papers that directly comment on each other's work and, in the process, alter the existing practice of this journal?

Journal ArticleDOI
TL;DR: The authors propose an estimation method that directly estimates Batsell and Polking's model; compared to the indirect estimation method suggested by Batsell and Polking, the direct method is simpler, making the BP model more accessible to potential users.
Abstract: Batsell and Polking (1985) developed one of the important choice models that address the problem of independence from irrelevant alternatives. In this note, we propose an estimation method that directly estimates Batsell and Polking's model. Compared to the indirect estimation method suggested by Batsell and Polking, the direct method is simpler, making the BP model more accessible to potential users.