
Showing papers in "The Journal of Business", 1980


Journal Article•DOI•
TL;DR: In this paper, the authors compared the traditional and extreme value methods and concluded that the extreme value method is about 2.5-5 times better, depending on how you choose to measure the difference.
Abstract: The random walk problem has a long history. In fact, its application to the movement of security prices predates the application to Brownian motion.1 And now it is generally accepted that, at least to a good approximation, ln (S), where S is the price of a common stock, follows a random walk.2 The diffusion constant characterizing that walk for each stock thus becomes an important quantity to calculate. In Section II, we describe the general random walk problem and show how the diffusion constant is traditionally estimated. In Section III, we discuss another way to estimate the diffusion constant, the extreme value method. In Section IV, we compare the traditional and extreme value methods and conclude that the extreme value method is about 2.5-5 times better, depending on how you choose to measure the difference. In Section V, we discuss the use of this method for the estimation of the variance of the rate of return of a common stock.

If S is the price of a common stock, it is now generally accepted that ln (S) follows a random walk, at least to a very good approximation. The diffusion constant characterizing that walk, which is the same as the variance of the rate of return, thus becomes an important quantity to calculate and is traditionally estimated using closing prices only. It is shown that the use of extreme values (the high and low prices) provides a far superior estimate.

1,753 citations
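
As a concrete illustration of the two estimators the abstract compares, here is a minimal Python sketch assuming a series of daily closes and daily highs/lows; the function names are ours, and the 1/(4 ln 2) factor is the extreme value scaling for the range of a driftless random walk.

import math

def close_to_close_variance(closes):
    """Traditional estimator: sample variance of the daily log returns ln(S_t / S_{t-1})."""
    r = [math.log(c1 / c0) for c0, c1 in zip(closes, closes[1:])]
    mean = sum(r) / len(r)
    return sum((x - mean) ** 2 for x in r) / (len(r) - 1)

def extreme_value_variance(highs, lows):
    """Extreme value estimator: average of (ln(H_t / L_t))^2 scaled by 1 / (4 ln 2)."""
    n = len(highs)
    return sum(math.log(h / l) ** 2 for h, l in zip(highs, lows)) / (4 * math.log(2) * n)

Both functions estimate the per-day variance of ln (S); the abstract's claim is that the second does so considerably more efficiently than the first for a given number of trading days.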


Journal Article•DOI•
TL;DR: In this article, the problem of estimating capital asset price volatility parameters from the most available forms of public data is examined, namely, data appearing in the financial pages of the newspaper.
Abstract: This paper examines the problem of estimating capital asset price volatility parameters from the most available forms of public data. While many varieties of such data are possible, we shall consider here only those which are truly universal in their accessibility to investors, namely, data appearing in the financial pages of the newspaper. In particular, we shall consider volatility estimators which are based upon the historical opening, closing, high, and low prices and transaction volume. Alternative estimators of volatility may be constructed from such data as significant news events, "fundamental" information regarding a company's prospects, and other forms of publicly available data, but these will not be considered here. Any parameter-estimation procedure must begin with a maintained hypothesis regarding the structural model within which estimation is to be made. Our structural model is given exposition in Section II. Section III discusses the "classical" ...

Improved estimators of security price volatilities are formulated. These estimators employ data of the type commonly found in the financial pages of a newspaper: the high, low, opening, and closing prices and the transaction volume. The new estimators are seen to have relative efficiencies that are considerably higher than the standard estimators.

1,363 citations
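
For orientation, a minimal Python sketch of one widely cited estimator of this open-high-low-close type; this particular functional form is an illustration of the family described in the abstract, not necessarily the exact estimator derived in the paper, and it ignores the volume data mentioned there.

import math

def ohlc_variance(o, h, l, c):
    """One well-known OHLC variance estimator for a single period:
    0.5 * (ln(H/L))^2 - (2 ln 2 - 1) * (ln(C/O))^2."""
    hl = math.log(h / l)
    co = math.log(c / o)
    return 0.5 * hl ** 2 - (2 * math.log(2) - 1) * co ** 2

# per-day estimates would typically be averaged over many trading days
print(ohlc_variance(o=100.0, h=103.0, l=99.0, c=101.5))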



Journal Article•DOI•
TL;DR: Recent developments in econometric demand analysis which may be of interest in market research are reviewed in this article, with particular attention given to models which yield tree structures of similarities between alternatives.
Abstract: I understand the discipline of marketing exists to answer questions such as: "Will housepersons buy more Brand A soap if its perfume content is increased?" Traditional econometric demand analysis provides no answer. Its attention has been concentrated on consumption levels of broad commodity classes (e.g., housing services), examined using aggregate market data, with demand models constructed on the twin pillars of economic rationality and consumer sovereignty. The market researcher has understandably looked elsewhere-to psychology and survey research-for answers to his questions. Realities have forced econometric demand analysts to broaden their perspective. Public intervention in the supply of some commodities, notably in the areas of transportation, energy, and communications, has required economists to recognize the marketing considerations implicit in issues of policy. (The decision of whether to build and how to design a public ...)

This paper reviews several recent developments in econometric demand analysis which may be of interest in market research. Econometric models of probabilistic choice, suitable for forecasting choice among existing or new brands, or switching between brands, are surveyed. These models incorporate attribute descriptions of commodities, making them statistical counterparts of the Court-Griliches-Lancaster theory of consumer behavior. Particular attention is given to models which yield tree structures of similarities between alternatives. Also reviewed are methods for estimating econometric models of probabilistic choice from "point-of-sale" sample surveys.

* Prepared for presentation at the Conference on Interfaces between Marketing and Economics, Graduate School of Management, University of Rochester, April 7, 1978. Research was supported in part by the National Science Foundation through grant SOC75-22657 to the University of California, Berkeley. Portions of this paper were written while the author was an Irving Fisher Visiting Professor of Economics at the Cowles Foundation for Research in Economics, Yale University.

751 citations
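
As a small, self-contained illustration of the class of models surveyed, a Python sketch of the basic multinomial logit choice probability with attribute-based utilities; the attribute values and coefficients below are hypothetical, and the tree-structured (similarity-sensitive) generalizations the abstract emphasizes are not shown.

import math

def logit_choice_probabilities(attributes, beta):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j), with V_i = beta . x_i."""
    v = [sum(b * x for b, x in zip(beta, xs)) for xs in attributes]
    m = max(v)  # subtract the max before exponentiating, for numerical stability
    expv = [math.exp(vi - m) for vi in v]
    total = sum(expv)
    return [e / total for e in expv]

# hypothetical example: three soap brands described by (price, perfume content)
print(logit_choice_probabilities([(2.0, 0.1), (2.5, 0.3), (3.0, 0.6)], beta=(-1.0, 2.0)))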


Journal Article•DOI•
TL;DR: In this paper, the authors examine the nature of interfirm cash tender offers and show that stockholders of both acquiring and acquired firms realize a significant capital gain regardless of the outcome of the offer or whether or not they tender their shares.
Abstract: It is commonly believed that the interfirm cash tender offer is an attempt by the bidding firm to purchase the target shares and profit from their subsequent market appreciation. This belief is inconsistent with the available evidence. While acquiring firms earn a positive return from the tender offer, they do not realize a capital gain from the target shares that they purchase. In the 161 successful offers in this study, bidding firms paid target stockholders an average premium of 49% for the shares they purchased. This premium is calculated relative to the closing price of a target share 2 months prior to the announcement of the offer. The average appreciation of the target shares through 1 month subsequent to the execution of the offer was 36%, relative to this same benchmark. In sum, target ...

This paper examines the nature of interfirm cash tender offers. The tender offer is viewed as a bid for the right to control the resources of the target firm. A model based on this interpretation, efficient and competitive markets, and rational expectations is developed and tested. The implications of this model are shown to be consistent with the following documented empirical facts concerning the average tender offer: the stockholders of both acquiring and acquired firms realize a significant capital gain; acquiring firms suffer a significant capital loss on the target shares they purchase; target stockholders realize a significant capital gain regardless of the outcome of the offer or whether or not they tender their shares.

466 citations
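
To see how a bidder can pay a large premium and still take a capital loss on the shares it buys, a back-of-the-envelope Python calculation using the averages quoted in the abstract (a 49% premium and 36% post-offer appreciation, both measured against the price two months before the announcement); the figures are illustrative sample averages, not a model.

p0 = 1.00            # target share price two months before the announcement (normalized)
paid = 1.49 * p0     # average price paid for tendered shares (49% premium)
value = 1.36 * p0    # average target price one month after execution (36% appreciation)

loss_per_share = value - paid          # -0.13: a capital loss on each purchased share
loss_on_cost = loss_per_share / paid   # about -8.7% of the purchase price

On this arithmetic the bidder's positive overall return must come from something other than appreciation of the purchased shares, which is what the paper attributes to the value of controlling the target's resources.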



Journal Article•DOI•
TL;DR: In this paper, the authors present a series of laboratory experiments which explore the behavior of computer-automated double-auction markets where face-to-face buyer-seller interaction has been eliminated.
Abstract: Over the last 2 decades there has evolved a considerable volume of literature describing the results of various experimental games in market decision making as well as their implications concerning economic theories of market price and quantity determination (see, e.g., Smith 1962, 1964, 1965, 1976b; Miller, Plott, and Smith 1977; Plott and Smith 1978; Williams 1979). The relevance of using controlled laboratory experiments to study resource allocation mechanisms has recently been addressed formally by V. L. Smith (1980), who has also provided fine summaries of the more important experimental results derived from contract price observations generated under various market institutions (Smith 1976b, 1980). The potential for increased efficiency obtainable through various electronic auction mechanisms has been recognized for many years (Cassidy 1967). However, recent discussions of automating and computerizing auction markets have generally been in relation to the creation of a congressionally mandated national securities market (e.g., Wall Street Journal 1978a, 1978b, 1978c). Charged with overseeing the develop...

In 1975 the Securities and Exchange Commission received a congressional mandate to move toward the creation of a national stock trading system. A potentially major step toward the development of such a system is the Cincinnati Stock Exchange's highly controversial pilot project which is currently testing an "all electronic" computerized trading mechanism. This paper reports on the design and analysis of a series of laboratory experiments which explore the behavior of computer-automated "double-auction" markets where face-to-face buyer-seller interaction has been eliminated. The experiments also serve to demonstrate the use of the PLATO computer as a research tool for social scientists interested in laboratory experimental techniques.

119 citations
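
For readers unfamiliar with the trading institution being automated, a toy Python sketch of a continuous double auction in which a trade occurs whenever the best standing bid meets or exceeds the best standing ask; this is a deliberately simplified stand-in for exposition, not the PLATO implementation, and the midpoint pricing rule is our assumption.

def double_auction(orders):
    """Toy continuous double auction. 'orders' is a list of ('bid' | 'ask', price);
    a trade occurs whenever the best bid is at least the best ask."""
    bids, asks, trades = [], [], []
    for side, price in orders:
        (bids if side == 'bid' else asks).append(price)
        if bids and asks and max(bids) >= min(asks):
            b, a = max(bids), min(asks)
            trades.append((b + a) / 2)  # simple midpoint rule; real institutions differ
            bids.remove(b)
            asks.remove(a)
    return trades

print(double_auction([('bid', 3.0), ('ask', 3.5), ('bid', 3.6), ('ask', 3.4)]))  # [3.55]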



Journal Article•DOI•
TL;DR: In this paper, the authors investigated the impact of the person and role relationships on the job outcomes of salesmen, and found that job outcomes are a function of role ambiguity and motivation but not necessarily job tension.
Abstract: Research into the sales force suggests that the behavior of salespeople will be a function of the person; the interactions the person has with customers, managers, and significant others; and the situation or context in which these interactions take place (Churchill, Ford, and Walker 1976; Bagozzi 1978). This article addresses the impact of two classes of these determinants, that is, the person and role relationships, as they influence particular outcomes on the job. Specifically, motivation and felt role strain and role ambiguity are investigated as they affect certain emotional, cognitive, and behavioral (i.e., performance) outcomes of salespeople. The discussion begins with a conceptualization of three broad classes of job outcomes experienced by salespeople. Next, motivation, role strain, and role ambiguity are defined, and the mechanisms relating these independent variables to job outcomes are specified. Third, the methodology and structural equation models used to test the hypotheses are presented. Included in the treatment is a discussion of the sample and ...

Salesmen experience three broad outcomes on the job. The first deals with feelings about the self and is termed self-esteem. The second refers to actual performance, such as dollar volume of sales achieved, new business generated, or expenses incurred. The third represents evaluations about specific dimensions of the work situation and is termed here job satisfaction. This article presents the results of a study investigating the determinants of each type of job outcome for a sample of industrial salesmen. Using a structural equation methodology, the research shows job outcomes are a function of role ambiguity and motivation but not necessarily job tension.

87 citations


Journal Article•DOI•
TL;DR: The authors survey recent theoretical work on consumer information acquisition, primarily by economists, arguing that consumer researchers and economists have much to learn from each other and that there are significant advantages to greater collaboration between economists and consumer researchers.
Abstract: Traditionally economists and consumer researchers have studied consumer behavior, especially under imperfect information, without paying a great deal of attention to one another. This is likely due to differences both in purpose and technique. Consumer researchers are generally concerned with individual choice, while economists focus on market outcomes. Even when economists propose detailed models of individual behavior, they tend to use these models to deduce propositions concerning overall market structure. Furthermore, consumer researchers draw data from both surveys and experiments, sources which economists distrust. This paper surveys recent theoretical work on consumer information acquisition, primarily by economists, arguing that consumer researchers and economists have much to learn from each other. There are two reasons for concentrating on theory. First, comprehensive surveys by James Bettman (1977) and Joseph Newman (1977) deal with the empirical literature. Second, and significantly, accompanying the recent attention afforded consumer information acquisition ...

This paper surveys recent theoretical work by economists on consumer information acquisition. Both models of individual behavior and models of market equilibrium are discussed. These models are used to illustrate the point that there are significant advantages to greater collaboration between economists and consumer researchers.

78 citations


Journal Article•DOI•
TL;DR: In this article, the competitive supply and pricing of highways with random traffic is modeled and the classic efficient-pricing results of Knight and Walters are shown to be valid in a world with uncertain traffic flows.
Abstract: This paper models the competitive supply and pricing of highways with random traffic. It adds the missing theory of the firm to Knight's highway problem and shows that competition delivers efficient prices. The classic efficient-pricing results of Knight and Walters are shown to be valid in a world with uncertain traffic flows. Competitive prices exceed marginal cost by an amount equal to expected marginal congestion cost. The mean throughput of each highway firm is less than maximum throughput. Even though the expectation of excess capacity is positive, random traffic jams will occur on an efficiently priced highway. Pricing of mixed traffic is also analyzed.
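
A hedged formalization of the pricing claim in the abstract, in notation assumed here rather than taken from the paper: let mc be the firm's marginal cost of serving a trip and c(q) the congestion (time) cost each user bears when traffic flow is q, so an additional trip imposes q ∂c/∂q on other users. The abstract's statement then reads

\[
p \;=\; mc \;+\; \mathrm{E}\!\left[\, q \,\frac{\partial c(q)}{\partial q} \right],
\]

with the expectation taken over the random traffic flow; the deterministic Knight-Walters condition is the special case in which the expectation operator drops out.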

Journal Article•DOI•
TL;DR: In this paper, the authors present an analysis of the economic consequences of infinitely variable product specification in the presence of diverse consumer preferences and some minimal degree of economies of scale near the origin, and thus it might be considered a study of the contemporary high-technology economy.
Abstract: Three results of the study which are of general interest are the following: 1. It is never optimal to produce any good at minimum average cost, but always better to increase variety at the expense of average cost when any good reaches this level of output. 2. A structure very similar to that of Chamberlin's monopolistic competition is the "most perfect" market structure that can be generated, being the Nash equilibrium of firms under conditions of perfect information, noncollusion, perfect flexibility, and free and willing entry. Thus, this structure cannot be regarded as "imperfect competition"1 and is here referred to as "perfect monopolistic competition." The traditional "perfect competition" structure cannot exist under the conditions posited for the ...

This paper is an analysis of the economic consequences of infinitely variable product specification in the presence of diverse consumer preferences and some minimal degree of economies of scale near the origin and thus might be considered a study of the contemporary high-technology economy. The study covers both the welfare economics and the market structures associated with such an economy. The emphasis throughout is on the degree of product variety which is optimal (in the welfare economics sections) or will be generated by the market and on how the degree of variety differs between different market structures and between the market and the optimum.

* The background analysis on which the results of this paper are based is set out in detail in Lancaster (1979), which was unpublished at the time of the original presentation of the paper. The author wishes to acknowledge the assistance of the National Science Foundation, grant SOC 75-14252. A preliminary version of the analysis, given in Lancaster (1975), reaches some incorrect conclusions concerning the properties of market equilibria because it did not include provision for "outside goods" (goods outside the product class being considered) without which there are no stable market equilibria.

1. In this sense Chamberlin was correct in asserting that monopolistic competition was not a form of imperfect competition. Since he was not able to handle the analysis of variable product differentiation, he was not able to show that the monopolistic competition structure was inherent in certain types of situation.


Journal Article•DOI•
TL;DR: Spong et al. as mentioned in this paper theoretically and empirically examined the relationship between controlling and minority share prices in closely held banks and found significant price premiums on controlling shares, averaging 50%-70%.
Abstract: Valuations of common stock take place for one of two reasons. First, in the more standard context, stock is valued by investors for its investment potential. Second, it is valued out of legal necessity for estate, gift tax, and litigation purposes. In any circumstance, if the shares are minority shares of a widely held, actively traded company's stock, the determination of market value is generally not controversial. However, if the shares are controlling shares and/or closely held,1 the determination of market value becomes much more difficult. Where the valuation of closely held shares is for legal purposes, value is "determined first by appraising the value of the enterprise, and then by allocating some portion of that value to the shares in question" (Feld 1974, p. 934). Under certain circumstances, ...

This paper theoretically and empirically examines the relationship between controlling and minority share prices in closely held banks. For price premiums on controlling shares to exist, three conditions must be met: control must provide special benefits unavailable to minority shareholders; control group members, individually, must be able to exploit control benefits; and control shares must be effectively isolated from minority shares in the market. Empirical results confirm the existence of significant price premiums on controlling shares, averaging 50%-70%. Relatively small groups tend to dominate activity in the control shares market, indicating that they value control more highly than larger groups.

* This research was supported in part by the University of Kansas School of Business Research Fund provided by the Fourth National Bank & Trust Co., Wichita, and in part by the University of Kansas General Research Fund, grant 36652038. The analysis and conclusions are solely the responsibility of the authors and do not necessarily reflect views of the Board of Governors of the Federal Reserve System or the Federal Reserve Bank of Kansas City, or the grant support institutions. Comments by Kenneth R. Spong, R. Corwin Grube, Ray Ball, and, especially, Michael Bradley, the reviewer, have been helpful.

1. "Closely held" stock is usually defined as stock that is owned by a small number of persons and is not listed on an exchange or traded over the counter.

Journal Article•DOI•
TL;DR: Zellner et al. as mentioned in this paper proposed a consumer choice model based on the assumption that the consumer maximizes utility net of information costs and that advertising lowers these information costs, and this model is used to generate the precise form of an estimable sales-advertising function.
Abstract: In spite of the great attention that marketing researchers have given to it, the determination of the sales-advertising function remains primarily an exercise in fitting statistical models to aggregate data sets. The approach of numerous studies to modeling the sales-advertising function has been to let the market data determine the "final" form of the model.1 Although such interesting phenomena as cumulative effects of advertising, the two-way causality between sales and advertising, and competitive advertising effects have been modeled in past studies, there has been little effort to explicitly link estimable sales-advertising functions to a theory of consumer behavior. Because the major use of the sales-advertising function is to set the firm's optimal level of advertising, it is necessary to obtain a model that adequately represents the true relationship between advertising and consumer demand for the ...

This paper emphasizes the usefulness of modeling the sales-advertising function based explicitly on a theory of individual consumer choice. The proposed model of consumer choice rests on two assumptions: that the consumer maximizes utility net of information costs, and that advertising lowers these information costs. This model is used to generate the precise form of an estimable sales-advertising function. The function allows for the systematic measurement of own and competitive advertising effects. Also, the consumer choice model has implications that can be empirically tested.

* Research supported in part by the National Science Foundation under grant SOC 73-05547. I wish to thank Arnold Zellner for his invaluable help in the writing of this paper. Robert Blattberg, John Gould, Gregg Jarrell, Michael Jensen, John Long, Peter Pashigian, and Subrata Sen also provided helpful comments. Their help is gratefully acknowledged. I alone am responsible for any remaining errors.

1. Notable examples are the studies by Palda (1964), Bass (1969), Bass and Parsons (1969), Bass and Clarke (1972), Beckwith (1972), Schmalensee (1972), Clarke (1973), Wildt (1974), and Helmer and Johansson (1977).

Journal Article•DOI•
TL;DR: Conjoint analysis (or trade-off analysis) has become increasingly used in marketing as discussed by the authors. The techniques attempt to parse the appeals of a product or concept into a set of factors, assign utility values to the separate levels of each factor, and finally, under the assumption of separability, determine the utility value of any product or concept by adding (or multiplying) the utility values of the individual levels of each of the factors embodied in the product or concept.
Abstract: Conjoint analysis (or trade-off analysis) has become increasingly used in marketing. Essentially, the techniques attempt to parse the appeals of a product or concept into a set of factors, assign utility values to the separate levels of each factor, and finally, under the assumption of separability, determine the utility value of any product or concept by adding (or multiplying) the utility values of the individual levels of each of the factors embodied in the product or concept. A good expository description of conjoint analysis is given in Green and Srinivasan (1978). For notational convenience, I present a brief description herein. Formally, if we are considering n factors, with the ith factor (i = 1, ..., n) having mi levels, then we shall represent a bundle as an n-tuple (i1, i2, ..., in), where
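
A minimal Python sketch of the additive rule just described, with hypothetical part-worths; partworths[i][level] plays the role of the utility value assigned to a level of the ith factor, and a bundle is the n-tuple of chosen levels.

def bundle_utility(partworths, bundle):
    """Additive conjoint rule: the bundle's utility is the sum, over factors,
    of the part-worth of the level chosen for each factor."""
    return sum(partworths[i][level] for i, level in enumerate(bundle))

# hypothetical part-worths: factor 0 has three levels, factor 1 has two
partworths = [[0.9, 0.5, 0.1], [0.2, 0.7]]
print(bundle_utility(partworths, (0, 1)))  # 0.9 + 0.7 = 1.6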

Journal Article•DOI•
TL;DR: In this paper, the authors observe that if a management science team from another galaxy were to enter a typical U.S. supermarket and measure sales response to price by varying shelf prices and observing sales, it would be surprised to find that the store seemed to be pricing each item much too low to maximize profit.
Abstract: If a management science team from another galaxy should enter a typical U.S. supermarket and measure sales response to price by varying shelf prices and observing sales, it would be surprised to find that the store seemed to be pricing each item much too low to maximize profit. Thus, for example, a typical store markup for a grocery product might be in the 15%-25% range. This would represent profit-maximizing behavior if the price elasticity were in the range 5-7. However, price elasticities obtained by in-store experiment usually give numbers less than two and sometimes much less. For example, the measurements of Henderson, Brown, and Hind (1961) imply elasticities for fruits as low as 0.2. Thus, the store acts as if the customer is considerably more sensitive to prices than customer actions in the store seem to indicate. There is a reason for this. The store has a special apprehension. Even though a customer, once in the store, may buy the item at a higher price, will the customer come back? A pricing policy that maximizes profits taking into account the problem of store loyalty may call for much lower prices than implied by strictly in-store measurements. Supermarket chains go to great effort to project an overall impression of low prices and good values for the money. They run newspaper ads ...

A two-stage theory of price setting postulates that customers once in the store purchase goods to maximize utility. This determines short-run price response. The store then maximizes short-run profit subject to a constraint on customer utility delivered. Utility level becomes a policy parameter that determines, in part, the long-run attractiveness of the store to the customers.
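
The link between markup and elasticity asserted at the start of the abstract follows from the standard single-product (Lerner) pricing condition (p - c)/p = 1/e; writing the markup over cost as m = (p - c)/c gives e = 1 + 1/m. A small Python check of the range quoted above:

def implied_elasticity(markup_over_cost):
    """Lerner condition (p - c)/p = 1/e with markup m = (p - c)/c implies e = 1 + 1/m."""
    return 1 + 1 / markup_over_cost

print(implied_elasticity(0.25))  # 5.0
print(implied_elasticity(0.15))  # about 7.7, roughly consistent with the 5-7 range cited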

Journal Article•DOI•
TL;DR: In this article, the author synthesizes the Bass model of the sales of new consumer durables within the context of the diffusion of new products and comments on some of the assumptions which underlie it.
Abstract: Traditionally, marketing scholars have focused their attention on demand analysis; that is, they have attempted to explain consumers' response to various marketing activities. By and large, the supply side, or the producer's problems, have been ignored. The Bass paper (this issue) is one of the first marketing studies to analyze both the consumer's and producer's problems. The analysis is within the context of the diffusion of new products. In this note I will synthesize the Bass work and comment on some of the assumptions which underlie it. The Bass (1969) model of the sales of new consumer durables can be equivalently represented as
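
The representation the note goes on to use is cut off in this extract. For reference, a standard statement of the Bass (1969) model, in notation assumed here (N(t) cumulative adopters, m market potential, p the coefficient of innovation, q the coefficient of imitation), is

\[
S(t) \;=\; \frac{dN(t)}{dt} \;=\; \Bigl[\, p + \frac{q}{m}\,N(t) \Bigr]\bigl[\, m - N(t) \bigr],
\]

so current sales S(t) rise with the stock of previous adopters (imitation) and fall as the untapped market m - N(t) shrinks.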

Journal Article•DOI•
TL;DR: In this article, the authors investigated automobile demand over the period 1965-75 using a disaggregate approach and found that automobile demand segmented into the five automobile classifications-subcompact, compact, intermediate, standard, and luxury-revealed much additional information.
Abstract: A research study in 1976 by one of the authors (Carlson 1978) investigated automobile demand over the period 1965-75 using a disaggregate approach. Previous studies by Chow (1957), Nerlove (1957), Suits (1958), Smith (1975)1 and others had explained auto purchase by aggregating all sizes and classes of automobile into one demand function. The 1976 study suggested that as little aggregation as possible is desirable in a model of the new car market and that automobile demand segmented into the five automobile classifications-subcompact, compact, intermediate, standard, and luxury-revealed much additional information. While the purpose of this earlier study was model building (with emphasis on computing elasticities and demonstrating a practical application of "seemingly unrelated regression") and not forecasting, the implications for future demand were discussed, and the tentative forecasts made were quite accurate. The model forecast 1977 sales at 10.3 million, 1978 sales at 10.7 million, and 1979 sales at less than 11.0 million. Actual new car sales for 1977 and 1978 were 10.7 and 10.9 million cars, respectively, with 1979 sales projected around 11 million. One test for model validity is its ultimate forecasting ability, ...

This research is an addendum to "Seemingly Unrelated Regression and the Demand for Automobiles of Different Sizes, 1965-75: A Disaggregate Approach," published by the Journal of Business in April 1978. The purpose of this additional study on automobile demand is (1) to point out the forecasting accuracy of the market segmentation approach and (2) to estimate the effects of the energy crisis on automobile demand for the years 1979-83. While this paper presents no additional theory on the topic, it is felt that these forecasts are of considerable interest in light of the importance of the auto industry to the nation's economy.

Journal Article•DOI•
TL;DR: In this paper, an iterative generalized least-squares (GILS) procedure was developed for estimating the parameters of certain economic relations characterized by first-order autocorrelated disturbances.
Abstract: The purpose of this paper is to develop a procedure to estimate the parameters for a class of distributed lag models by making use of data aggregated over time. The general problem of aggregating economic relations over time has already received considerable attention in the literature. Theil (1954) explained the difficulties of obtaining the correct aggregate relation when lagged variables appeared in the micro relationship, but he did not consider the role of the disturbance term. Mundlak (1961) studied the effects of aggregation over time on the partial adjustment model and developed the relationship between the parameters of the micro relation and those of the misspecified (to accommodate aggregate data) time-aggregated macro model. Mundlak was also unconcerned with the full role of the disturbances in such circumstances. Moriguchi (1970), taking account of disturbances, was able to quantify both the estimation bias and the loss of efficiency resulting from temporal aggregation for certain cases of the independent variable X(t). More recently, Rowe (1976) has demonstrated the effect of temporal aggregation on regression t-ratios and R²s. All of these studies are similar in that they did not devise an estimation procedure to correct the bias arising from temporal aggregation. In a later sec...

This paper develops an iterative generalized least-squares (GILS) procedure for estimating the parameters of certain economic relations characterized by first-order autocorrelated disturbances (y_t = βx_t + ε_t, ε_t = ρε_{t-1} + η_t) when the available data have been aggregated over time. The estimation procedure is conditional on knowledge of the level of aggregation (the number of subintervals) making up the aggregate data interval. An example of the estimation procedure is provided using a set of annual sales-advertising data.
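
A minimal Python sketch of the familiar iterative (Cochrane-Orcutt-style) GLS loop for the disaggregate relation y_t = b*x_t + e_t with AR(1) errors, in assumed notation; this shows only the standard building block and does not reproduce the paper's correction for temporally aggregated data.

def iterative_gls_ar1(y, x, iterations=20):
    """Iterate between estimating rho from residuals and re-estimating b by OLS
    on quasi-differenced data, for y_t = b*x_t + e_t, e_t = rho*e_{t-1} + u_t."""
    n = len(y)
    b = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)  # OLS start
    rho = 0.0
    for _ in range(iterations):
        e = [yi - b * xi for xi, yi in zip(x, y)]
        rho = sum(e[t] * e[t - 1] for t in range(1, n)) / sum(ei * ei for ei in e[:-1])
        ys = [y[t] - rho * y[t - 1] for t in range(1, n)]
        xs = [x[t] - rho * x[t - 1] for t in range(1, n)]
        b = sum(xi * yi for xi, yi in zip(xs, ys)) / sum(xi * xi for xi in xs)
    return b, rho

# usage: b_hat, rho_hat = iterative_gls_ar1(list_of_y, list_of_x)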

Journal Article•DOI•
TL;DR: In this paper, the authors examined consumer behavior within the warranty period and found that the consumer recognizes that the use of an appliance is inherently risky, subject to some probability of failure, and takes this into account when choosing a warranty and the amount of use of the appliance.
Abstract: For many years appliance manufacturers have offered warranties with durations stated in chronological time (Gerner and Bryant 1976). Such warranties are designed to cover costs of repair in the event that the appliance fails while the warranty is in force. That appliance warranties are universally offered with chronological durations is strong evidence of the manufacturers' belief that reliability, at least during the first years of appliance life, is a technological property of the appliance. As such it is controlled by the manufacturer via product design and specification. But this view ignores the role consumers play. This paper examines consumer behavior within the warranty period. In the model the consumer recognizes that the use of an appliance is inherently risky, subject to some probability of failure. The consumer takes this into account when choosing a warranty and the amount of use of the appliance. Both the demand for the warranty comprehensiveness and the demand for use are estimated, using data on television use and repair. The estimates provide some evidence that use, and consequently repair, depends on warranty coverage.

Journal Article•DOI•
TL;DR: In this article, a simple model involving two traders, each of whom stands to gain by trading, is developed, and the effects of changing the gains and costs from trading on the amount of "marketing" effort expended by each of the traders are examined.
Abstract: Markets have played a central role in economic analysis at least since the publication of Adam Smith's Wealth of Nations. Various forms of market organization-pure competition, duopoly, oligopoly, and monopoly-have been considered in great detail in the literature, and virtually every course in microeconomic theory devotes a substantial amount of time to explaining the nature and implications of these different kinds of markets. The traditional approach in economic analysis has been to assume a particular form of market organization and then to analyze output, price, and cost behavior in the context of the assumed market form. A closely related matter, though one which seems to have received less attention, is the question of where the market comes from in the first place. The process of making a market is, after all, itself an economic activity in the sense that it uses scarce resources. In the usual ...

A simple model involving two traders, each of whom stands to gain by trading, is developed. The traders must expend resources to find each other so that trade can take place. The effects of changing the gains and costs from trading on the amount of "marketing" effort expended by each of the traders are examined. It is found that increasing the gains from trading need not increase the amount of search expenditures made by the traders and may in fact result in a reduction in the probability that trade occurs. This is because there are external effects that the traders do not take into account when making search decisions that are optimal from their individual viewpoints. These externalities suggest that both traders may gain from the emergence of a middleman who acts as a market maker and takes the externalities into account.

* This paper was written for the Rochester Conference on Interfaces between Marketing and Economics, April 7-8, 1978. The author has received several useful comments from members of the Applied Price Theory workshop at the University of Chicago and from the Economic Theory seminar at the University of Toronto. Comments and suggestions from Thomas Borcherding, Dennis Carlton, Jack Carr, Arthur De Vany, George Haines, Albert Madansky, Naser Saidi, and Lester Telser have been particularly helpful. All errors are, of course, the responsibility of the author.
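
To make the externality concrete, a toy numeric version of the two-trader search game with functional forms that are our illustrative assumptions, not the paper's: trade occurs with probability 1 - exp(-(e1 + e2)), trader i gains g_i if trade occurs, and effort e_i costs cost*e_i. Comparing a best-response (Nash) outcome with the joint-surplus optimum illustrates the under-provision of search effort that motivates the market-maker argument.

import math

def nash_and_joint_search(g1=10.0, g2=10.0, cost=2.0, grid=None):
    """Return (Nash efforts found by best-response iteration on a grid,
    joint-surplus-maximizing efforts) for the toy search game described above."""
    grid = grid or [i / 20 for i in range(0, 61)]  # candidate efforts 0.0 .. 3.0

    def payoff(gain, own_effort, other_effort):
        return gain * (1 - math.exp(-(own_effort + other_effort))) - cost * own_effort

    e1, e2 = 0.0, 0.0
    for _ in range(100):  # best-response iteration
        e1 = max(grid, key=lambda e: payoff(g1, e, e2))
        e2 = max(grid, key=lambda e: payoff(g2, e, e1))

    joint = max(((a, b) for a in grid for b in grid),
                key=lambda ab: payoff(g1, ab[0], ab[1]) + payoff(g2, ab[1], ab[0]))
    return (e1, e2), joint

print(nash_and_joint_search())  # total Nash effort falls short of the joint optimum here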

Journal Article•DOI•
TL;DR: This article found that the inclusion of human capital appears to have little meaningful effect on both general capital asset pricing and individual investor portfolio composition, due to the fact that relationships between returns on almost all types of human resources and those of marketable financial assets are so weak as to make these two capital asset groupings effectively separable.
Abstract: In the past 2 decades, much progress has been made in the areas of "human capital theory" and "modern portfolio theory," with profound influence upon academic thought and practice.1 But interestingly enough, though human capital theory recognizes human resources as part of an individual's capital asset holdings and modern portfolio theory deals with the pricing of these holdings, only recently have efforts been made to bring these two areas together.2 Employing a popular extension of the Sharpe-Lintner capital asset pricing model which allows for the existence of nonmarketable human capital, this study finds that empirically the inclusion of human capital appears to have little meaningful effect upon both general capital asset pricing and individual investor portfolio composition. This is shown to arise from the fact that relationships between returns on almost all types of human capital and those of marketable financial assets are so weak as to make these two capital asset groupings effectively separable.

Journal Article•DOI•
TL;DR: In this paper, the authors discuss the rationale for the popular use of individual level analysis of preferences in marketing and discuss some of the limitations of a popular quantal choice approach, namely, the LOGIT model.
Abstract: In his paper, "On Conjoint Measurement and Quantal Choice Models," Albert Madansky (this issue) points out that conjoint analysis usually deals with each respondent separately. It is an individual level analysis in the sense that the idiosyncratic parameters for each individual are estimated using only the preference judgments of that individual (for an illustration, see Parker and Srinivasan [1976, p. 1009]). On the other hand, quantal choice models are usually estimated at the aggregate level in the sense that a common set of parameters is estimated from the choice data of a sample of individuals (for an illustration, see McFadden [1976, p. 130]). Variation in the utility function across individuals is incorporated in such quantal choice models through a vector of measured characteristics of the individual. In this comment, we first discuss the rationale for the popular use of individual level analysis of preferences in marketing. Although conjoint analysis is usually carried out at the individual level and quantal choice approach is usually carried out at the aggregate level, it is possible to use conjoint analysis at the aggregate level (e.g., see Srinivasan and Shocker 1973) and use quantal choice analysis at the individual level (e.g., see Jain et al. 1979). The second part of this comment discusses some of the limitations of a popular quantal choice approach, namely, the LOGIT model, in individual level analysis.

Journal Article•DOI•
TL;DR: In this article, the authors investigate optimal consumer decision rules when the consumer is faced with some attributes of a good whose qualities he can determine prior to purchase (search attributes) and with some other attributes for the same good, which he can only determine only after purchase (experience attributes).
Abstract: Unbeknown to either of us, Wilde (this issue) and I have been pursuing similar lines of inquiry. We are both investigating optimal consumer decision rules when the consumer is faced with some attributes of a good whose qualities he can determine prior to purchase (search attributes) and with some other attributes for the same good whose qualities he can determine only after purchase (experience attributes). It is not surprising, therefore, that I think this is a particularly fruitful line of inquiry. Since Wilde and I are in such general agreement, little purpose can be served by my detailing the little points of disagreement between us. Instead, I would like to focus on a question that Wilde neglects. Why should anybody be interested in the consumer searching, on one hand, or experiencing, on the other? This distinction between search attributes and experience attributes is the most decisive determinant of market behavior for a considerable number of characteristics important for both marketing and economics. Let me give you some feel for this proposition. To make the analysis tractable, I have assumed that goods differ by a single parameter, R = the ratio of the variance by brands in the utility of a representative consumer of all experience attributes to this variance for search attributes. The greater the value of R, the more a consumer will choose to get his information by way of experience and the less by way of search. Hence R measures the relative importance of experience and search. Clearly R is unobservable. However, one can get observable implications from the behavior of R. We can determine from theory whether some given observable market variable will be positively or negatively related to R. Those market variables which are both positively related to R or both negatively related to R will be positively related to each other. When one market variable is positively related to R and one is

Journal Article•DOI•
TL;DR: In this paper, the authors examined the effects of a particular Securities and Exchange Commission (SEC) requirement that mandates banks reporting to them to have their financial information certified by external auditors, and found that this imposition affected the stock returns negatively for those banks that did not voluntarily choose to employ external audits prior to the regulation.
Abstract: One of the most striking characteristics of the securities industry is that it is very heavily regulated, especially with respect to information production. There are at least three identifiable levels of intervention dealing with the production of financial information: first, corporations are required to provide certain types of information and some minimum amount of information; second, the financial information, for the most part, has to be certified (audited); and finally, those who certify the statements have themselves to be licensed. All these regulations are instituted for the alleged protection of the investing public. This study will examine the second type of intervention mentioned above, namely, a mandatory audit requirement. It will specifically examine the effects of a particular Securities and Exchange Commission (SEC) requirement that mandates banks reporting to them to have their ...

This paper examines the effects on stock returns of the mandatory audit regulation imposed on banks by the Securities and Exchange Commission in 1971. It is found that this imposition affected the stock returns negatively for those banks that did not voluntarily choose to employ external audits prior to the regulation. This is contrary to claims that such a mandatory regulation would be beneficial. The methodology employed is based on the familiar market model and the Gonedes approach.

Journal Article•DOI•
TL;DR: Little and Shapiro as discussed by the authors argue that although stores could take temporary advantage of their customers, their long-term success depends on building customer loyalty, and they argue that the incredible quantity of information needed for their kind of pricing rules has made it virtu-
Abstract: Little and Shapiro (this issue) raise a number of provocative points in their paper, "A Theory for Pricing Nonfeatured Products in Supermarkets," which illustrate some of the commonalities between marketing and economics. Economists preach the principles of profit maximization and marginal cost-marginal revenue pricing, yet these concepts are often not reflected in marketing analyses of applied real-life problems. It is heartening, therefore, to see a marketing paper heavily dosed with economics. In the traditional role of a discussant, let me first briefly try to describe what I see as the central message of the Little-Shapiro paper. Their theoretical treatise addresses the problem of a supermarket manager faced with the task of setting prices for upwards of 6,000 products. They argue that although stores could take temporary advantage of their customers, their long-term success depends on building customer loyalty. Assuming a traditional well-behaved customer utility function, they derive the store pricing rule which maximizes store profits for a given level of customer utility. They also show that solving the dual to this problem-maximizing customer utility holding store profits constant-as expected yields the same pricing rule. Thus, in effect, the store's pricing problem can be reduced to two steps. First, for any given level of customer utility, find the price vector (not necessarily unique) which makes the store best off. Then, assuming the store knows how customer utility maps into patronage, select that level of utility which maximizes total store profits. These results are neither surprising nor new. Indeed, the analysis is similar to what I would expect if this problem were given to most economists. A far more interesting problem, and unfortunately one not pursued as fully by Little and Shapiro, is how in fact stores should (or do) implement a pricing policy. They argue that the incredible quantity of information needed for their kind of pricing rules has made it virtu-

Journal Article•DOI•
TL;DR: In this paper, the authors construct a model in which it is rational for an agent with constant absolute risk aversion to select the more risky of two investments if and only if his wealth is small.
Abstract: The existence of insurance, futures markets, and stock markets indicates the importance of risk aversion in economics. In spite of this, there are few studies which attempt to estimate the degree of an agent's risk aversion (see Friend and Blume 1975).1 Any study which seeks to quantify risk aversion, regardless of the setting, must necessarily resort to indirect measures. In this article we construct a model in which it is rational for an agent with constant absolute risk aversion to select the more risky of two investments if and only if his wealth is small. This paradoxical behavior is induced by the presence of discounting and a bankruptcy constraint, and it bodes ill for the empirical resolution of the controversial assumption of risk-averse agents.

* This research was partially supported by the National Science Foundation through grant SOC-7808985.

1. This gap is especially significant given the continued controversy regarding the nature of the objective function when agents must produce in an uncertain environment. Some (see Sandmo 1971) have assumed that firms maximize expected utility where utility is a strictly concave function of profits. The model based on the strict concavity assumption gives rise to implications that are quite different from those flowing from expected profit maximization. For example, fixed costs affect output decisions. One objection to this model asserts that firms with strictly concave (or strictly convex) utility functions will not survive when confronted with risk-neutral competitors. In the long run, the average profits of the risk-averse firms will be smaller than those of the risk-neutral firm. Moreover, there will be a tendency for them to disappear or be acquired by risk-neutral entrepreneurs (see, e.g., Gould 1976). Furthermore, the demise of the risk-averse entrepreneur will be beneficial for stockholders who will want each firm in their diversified portfolio to maximize expected profits. On the other hand, if there are perfect markets, then Fisher separation obtains and the firm's goal is profit maximization which, empirically, is indistinguishable from a linear utility function. (The empirical work on efficient markets starts with the assumption of perfect capital markets.) This problem continues to be the subject of theoretical inquiry (see Radner 1974).
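
For contrast, a hedged reminder of the textbook benchmark that makes this behavior look paradoxical, in notation assumed here rather than the authors': with constant absolute risk aversion and a normally distributed risky return, the optimal dollar position in the risky asset does not depend on wealth,

\[
u(w) = -e^{-aw}, \qquad \tilde R \sim N(\mu, \sigma^{2}), \qquad
x^{*} \;=\; \arg\max_{x}\; \mathrm{E}\Bigl[ u\bigl( w_{0}(1+r) + x(\tilde R - r) \bigr) \Bigr] \;=\; \frac{\mu - r}{a\,\sigma^{2}},
\]

so x* is independent of initial wealth w_0. The paper's point is that discounting plus a bankruptcy constraint break this wealth independence and make the riskier investment optimal only at low wealth.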


Journal Article•DOI•
TL;DR: Building on De Vany's treatment of service quality in a monopoly, the authors examine the behavior of a duopoly in terms of a stochastic model in which firms have three control parameters: price, capacity, and an estimate of quality.
Abstract: The markets represented by the textbook definitions of "monopoly," "oligopoly," and "monopolistic competition" tend to merge into one another in real life. Customers (the demand side of the market) are constrained by technological requirements, or by the conviction that one supplier's product is superior to that of another, but only up to a certain point. They may, for several reasons, take their business elsewhere or even change their purchasing requirements permanently, even though the market structure for the product or component is not characterized as being competitive. Hotelling (1929), Stigler (1968), and others have discussed the role of nonprice factors in the decisions of firms and customers. More recently, De Vany (1976) has treated the role of service quality in a monopoly by defining the service offered by the firm to include the customer's cost of waiting for the product. "Monopoly" in this model is not absolute: a customer may cancel an order if, on arrival, he finds the expected delay too long. In the short run it makes no difference to the monopolist whether the cancelling customer has a substitute service or whether he leaves the market altogether. De Vany models the monopoly market as a ...

The behavior of a duopoly is examined in terms of a stochastic model in which firms have three control parameters: price, capacity, and an estimate of quality. Customers choose a source firm and, if their estimate of delay at source is too large, they switch ("jockey") to the alternate source. Arrivals and production are random. For a given set of control variables the model computes market shares. Given the form of the cost functions of the firms, one can compute the optimal values of the parameters in the short run and in the long run.