
Showing papers in "Management Science in 1988"


Journal ArticleDOI
TL;DR: In this paper, three dimensions of causal structure are considered (causal agency, logical structure, and level of analysis), each reflecting theorists' assumptions about the nature and direction of causal influence.
Abstract: This article concerns theories about why and how information technology affects organizational life. Good theory guides research, which, when applied, increases the likelihood that information technology will be employed with desirable consequences for users, organizations, and other interested parties. But what is a good theory? Theories are often evaluated in terms of their content-the specific concepts used and the human values served. This article examines theories in terms of their structures-theorists' assumptions about the nature and direction of causal influence. Three dimensions of causal structure are considered-causal agency, logical structure, and level of analysis. Causal agency refers to beliefs about the nature of causality: whether external forces cause change, whether people act purposefully to accomplish intended objectives, or whether changes emerge unpredictably from the interaction of people and events. Logical structure refers to the temporal aspect of theory-static versus dynamic-and to the logical relationships between the "causes" and the outcomes. Level of analysis refers to the entities about which the theory poses concepts and relationships-individuals, groups, organizations, and society. While there are many possible structures for good theory about the role of information technology in organizational change, only a few of these structures can be seen in current theorizing. Increased awareness of the options, open discussion of their advantages and disadvantages, and explicit characterization of future theoretical statements in terms of the dimensions and categories discussed here should, we believe, promote the development of better theory.

2,277 citations


Journal ArticleDOI
TL;DR: An approximation method for solving the minimum makespan problem of job shop scheduling that sequences the machines one by one, successively, each time taking the machine identified as a bottleneck among the machines not yet sequenced.
Abstract: We describe an approximation method for solving the minimum makespan problem of job shop scheduling. It sequences the machines one by one, successively, taking each time the machine identified as a bottleneck among the machines not yet sequenced. Every time after a new machine is sequenced, all previously established sequences are locally reoptimized. Both the bottleneck identification and the local reoptimization procedures are based on repeatedly solving certain one-machine scheduling problems. Besides this straight version of the Shifting Bottleneck Procedure, we have also implemented a version that applies the procedure to the nodes of a partial search tree. Computational testing shows that our approach yields consistently better results than other procedures discussed in the literature. A high point of our computational testing occurred when the enumerative version of the Shifting Bottleneck Procedure found in a little over five minutes an optimal schedule to a notorious ten machines/ten jobs problem on which many algorithms have been run for hours without finding an optimal solution.

1,579 citations
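
The building block the abstract describes is the repeated solution of one-machine scheduling problems. The sketch below is not the paper's subproblem solver; it is a minimal, hedged illustration of that one-machine head-body-tail subproblem using Schrage's dispatching rule (among released jobs, schedule the one with the longest tail), on hypothetical job data.

```python
# Minimal sketch (not the paper's algorithm): Schrage's dispatching rule for the
# one-machine head-body-tail problem that the Shifting Bottleneck Procedure
# solves repeatedly. Each job has a release time r (head), processing time p
# (body), and delivery time q (tail); the objective is max over jobs of
# (completion time + q), which we try to keep small.

def schrage(jobs):
    """jobs: list of (name, r, p, q). Returns (objective, sequence)."""
    unreleased = sorted(jobs, key=lambda j: j[1])   # by release time
    ready, sequence = [], []
    t, objective = 0, 0
    while unreleased or ready:
        # move newly released jobs into the ready set
        while unreleased and unreleased[0][1] <= t:
            ready.append(unreleased.pop(0))
        if not ready:                    # machine idle: jump to next release
            t = unreleased[0][1]
            continue
        ready.sort(key=lambda j: -j[3])  # pick the released job with the longest tail
        name, r, p, q = ready.pop(0)
        t += p
        objective = max(objective, t + q)
        sequence.append(name)
    return objective, sequence

# Hypothetical data: (name, head r, body p, tail q)
jobs = [("A", 0, 5, 7), ("B", 2, 3, 10), ("C", 4, 4, 2)]
print(schrage(jobs))   # (18, ['A', 'B', 'C']) -- a heuristic sequence, not necessarily optimal
```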


Journal ArticleDOI
TL;DR: In this article, a set of hypotheses is induced from a field investigation of four microcomputer firms, in which the authors studied how each top management team went about making major decisions; the findings take the form of paradoxes that the successful firms resolve and the unsuccessful firms do not.
Abstract: How do executives make strategic decisions in industries where the rate of technological and competitive change is so extreme that market information is often unavailable or obsolete, where strategic windows are opening and shutting quickly, and where the cost of error is involuntary exit? How do top management teams divide the decision making responsibility? And how is risk of strategic error mitigated? What we report here is a set of hypotheses induced from a field investigation of four microcomputer firms, where we studied how each of the top management teams went about making major decisions. Our goal was to extend prior work on strategic decision making to what we term high velocity environments. Our results consist of a set of paradoxes which the successful firms resolve and the unsuccessful firms do not. We found an imperative to make major decisions carefully, but to decide quickly; to have a powerful, decisive CEO and a simultaneously powerful top management team; to seek risk and innovation, but...

1,479 citations


Journal ArticleDOI
TL;DR: In the test, lead users were successfully identified and proved to have unique and useful data regarding both new product needs and solutions responsive to those needs; new product concepts generated on the basis of lead user data were found to be strongly preferred by a representative sample of PC-CAD users.
Abstract: Recently, a "lead user" concept has been proposed for new product development in fields subject to rapid change von Hippel [von Hippel, E. 1986. Lead users: A source of novel product concepts. Management Sci.32 791-805.]. In this paper we integrate market research within this lead user methodology and report a test of it in the rapidly evolving field of computer-aided systems for the design of printed circuit boards PC-CAD. In the test, lead users were successfully identified and proved to have unique and useful data regarding both new product needs and solutions responsive to those needs. New product concepts generated on the basis of lead user data were found to be strongly preferred by a representative sample of PC-CAD users. We discuss strengths and weaknesses of this first empirical test of the lead user methodology, and suggest directions for future research.

1,145 citations


Journal ArticleDOI
TL;DR: In this article, the authors suggest that managerial influence is not equally perceived by all subordinates; rather, certain context-specific characteristics of individual employees mediate that influence.
Abstract: In the implementation of an organizational innovation, managers are usually presumed to influence the extent to which the innovation is adopted and used by their subordinates. However, the findings presented in this paper suggest that the managerial influence is not equally perceived by all subordinates. Rather, certain context-specific characteristics of individual employees mediate the managerial influence. Users of the expert system studied herein who were low in personal innovativeness toward this class of innovations, for whom the subjective importance of the task being computerized was low, whose task-related skills were low, or who were low performers in their sales job: all these user groups perceived their management had encouraged them to adopt. In contrast, users who rated high on any of these measures did not perceive any management influence in their adoption decision. Moreover, although access to the innovation was in fact highly similar for all users, high performers also were inclined to perceive the system as more accessible than were low performers. These findings suggest that the diffusion of an innovation within an organization perhaps could be viewed as a two-step managerial process. Employees whose characteristics incline them to adopt an innovation will do so without management support or urging if it is simply made available. Employees low on these characteristics will await a managerial directive before adopting. Implications for future research are discussed.

759 citations


Journal ArticleDOI
TL;DR: This work discusses the importance of maintaining the distinction between normative and descriptive views of decision-making in the face of attempts to compromise, and notes that the influence diagram provides new clarity to the conversation between decision-maker and analyst, allowing representations that are both easily understandable and mathematically consistent.
Abstract: Decision analysis stands on a foundation of hundreds of years of philosophical and practical thought about uncertainty and decision-making. The accomplishments and promise of the field are impressive, yet it has not become commonplace even in very important decisions. While human nature may pose an ultimate limitation, maintaining clarity of concept and exploiting progress in the realms of scope, skill, and efficiency should lead to more widespread use. A central conceptual distinction is that between normative and descriptive views of decision-making. We discuss the importance of maintaining this distinction in the face of attempts to compromise. The procedures for formulating, eliciting, evaluating, and appraising the decision problem are all experiencing major improvements. The strategy-generation table helps in finding creative alternatives. Decision quality concepts permit us to assure both effectiveness and efficiency in analyzing decision problems. The influence diagram provides new clarity to the conversation between decision-maker and analyst, allowing representations that are both easily understandable and mathematically consistent. The clarity test makes sure we know what we are talking about regardless of what we are saying about it. Direct and indirect values illuminate preferences. Generic risk attitude considerations indicate how to relate corporate risk tolerance to the financial measures of the corporation. Spreadsheet, decision tree, and influence diagram programs speed evaluation. Intelligent decision systems realized in computers offer promise of providing the benefits of decision analysis on a broader scale than ever before. Decision analysis is now poised for a breakthrough in its usefulness to human beings.

704 citations


Journal ArticleDOI
TL;DR: Building on Kahneman and Tversky's Prospect Theory, this paper demonstrates the applicability of the reference point concept to intertemporal choice: when people choose between immediate and delayed consumption, the reference point used to evaluate alternatives can significantly influence choice.
Abstract: Recent research has demonstrated that choices between gambles are systematically influenced by the way they are expressed. Kahneman and Tversky's Prospect Theory [Kahneman, D., A. Tversky. 1979. Prospect theory: An analysis of decision under risk. Econometrica 47(2) 363-391.] explains many of these "framing" effects as shifts in the point of reference from which prospects are evaluated. This paper demonstrates the applicability of the reference point concept to intertemporal choice. Three experiments demonstrate that when people choose between immediate and delayed consumption, the reference point used to evaluate alternatives can significantly influence choice. The first study elicited relative preference for immediate and delayed consumption using three methods, each of which differently framed choices between alternatives offering identical end-state consumption. The conventional discounted utility model predicts that the three methods of elicitation should yield similar estimates of time preference, but preferences were found to differ in accordance with a reference point model. The second and third studies extend and replicate the results from the first, the third using real rather than hypothetical choices.

610 citations


Journal ArticleDOI
TL;DR: In this article, the authors examined the effectiveness of decision makers aided by a decision support system (DSS) relative to decision makers without a DSS over an eight-week period, and found that the DSS-aided groups made significantly more effective decisions in the business simulation game than their non-DSS counterparts.
Abstract: Despite the increasing popularity of decision support systems (DSS), effectiveness of such systems remains unproven. Past research claiming usefulness of the DSS has relied largely on anecdotal or case data. The relatively few laboratory experiments report mixed results regarding the effects of a decision aid. This study reviews the results of prior investigations and examines the effectiveness of DSS-aided decision makers relative to decision makers without a DSS over an eight-week period. An executive decision making game was used in two sections of a business strategy course. Three-person teams in one section used a DSS while the teams in the other section played the game without such an aid. Various measures of decision quality were recorded. Overall, the groups with access to the DSS made significantly more effective decisions in the business simulation game than their non-DSS counterparts. The DSS groups took more time to make their decisions than the non-DSS groups at the beginning of the experiment. However, the decision times converged after the third week. The DSS teams reported investigating more alternatives and exhibited a higher confidence level in their decisions than the non-DSS groups, but these differences were not statistically significant.

450 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a brand choice model to aid in the pre-launch management of a new consumer durable entry in an existing category, which contributes to theory by integrating the critical phenomena of multiattribute preference, risk, and dynamics in an individual level expected utility framework.
Abstract: This paper proposes a brand choice model to aid in the prelaunch management of a new consumer durable entry in an existing category. The model contributes to theory by integrating the critical phenomena of multiattribute preference, risk, and dynamics in an individual level expected utility framework. The integration is based on established theoretical constructs in utility, Bayesian decision analysis, and discrete choice theory. Measurement and estimation procedures are presented, an application is described, and the managerial relevance of this work as a planning and forecasting tool is examined.

335 citations


Journal ArticleDOI
TL;DR: The results of experiments designed to assess the effectiveness of an inductive algorithm in discovering predictive knowledge structures in financial data show that for all cases tested, the inductively produced knowledge structures perform better than the competing models.
Abstract: With rapidly growing interest in the development of knowledge-based computer consulting systems for various problem domains, the difficulties associated with knowledge acquisition have special importance. This paper reports on the results of experiments designed to assess the effectiveness of an inductive algorithm in discovering predictive knowledge structures in financial data. The quality of the results is evaluated by comparing them to results generated by discriminant analysis, individual judgments, and group judgments. A partial intersection of predictive attributes occurs. More importantly, for all cases tested, the inductively produced knowledge structures perform better than the competing models.

299 citations
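
The abstract does not specify the inductive algorithm; the sketch below is only a generic illustration of the kind of comparison it describes, using a decision-tree inducer versus linear discriminant analysis on synthetic (hypothetical) financial-ratio data via scikit-learn.

```python
# Minimal sketch (assumptions: synthetic data; scikit-learn's tree inducer and LDA
# stand in generically for the paper's inductive algorithm and discriminant model).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical "financial ratios" for 400 firms; label 1 = distressed.
n = 400
X = rng.normal(size=(n, 4))                       # e.g. liquidity, leverage, ROA, turnover
y = (X[:, 1] - X[:, 2] + 0.5 * rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)

print("induced tree  accuracy:", tree.score(X_te, y_te))
print("discriminant  accuracy:", lda.score(X_te, y_te))
```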


Journal ArticleDOI
TL;DR: In what appears to be the first comprehensive empirical investigation of the differences between Japan and the United States in innovation cost and time, the authors find that Japanese firms allocate their resources quite differently than do American firms, with a larger percentage of total innovation cost devoted to tooling and manufacturing equipment and facilities, and a smaller percentage devoted to marketing startup.
Abstract: This study, based on detailed data obtained from carefully selected samples of about 200 Japanese and American firms, seems to be the first comprehensive empirical investigation of the differences between Japan and the United States in innovation cost and time. Whereas the Japanese have substantial advantages in this regard in some industries (notably machinery), they do not seem to have any substantial advantage in others (notably chemicals). Whereas they have great advantages in carrying out innovations based on external technology, they do not seem to have any in carrying out innovations based on internal technology. Japanese firms allocate their resources quite differently than do American firms, a larger percentage of total innovation cost being devoted to tooling and manufacturing equipment and facilities, a smaller percentage being devoted to marketing startup. A large part of America's problem in this regard seems to be due to its apparent inability to match Japan as a quick and effective user of external technology.

Journal ArticleDOI
TL;DR: A stochastic-demand version of the single-stage lot-sizing problem with time-varying demand is formulated, incorporating a service-level constraint on the probability of a stockout; the static uncertainty strategy is shown to be the most straightforward to modify and "roll along" as new demands become known.
Abstract: We formulate a stochastic-demand version of the single-stage lot-sizing problem with time-varying demand, incorporating a service-level constraint on the probability of a stockout. Three strategies are studied. The "static uncertainty" strategy, in which lot-sizing decisions for every period must be made at the beginning of period 1, is shown to yield an equivalent deterministic problem with time-varying demands for which optimal or good heuristic solutions exist. The procedure by which this equivalent problem is obtained is computationally simple. The "dynamic uncertainty" strategy allows subsequent lot sizes to be chosen on the basis of demands that have become known at a later point in time. The "static-dynamic" uncertainty approach combines features of the above two strategies and yields an equivalent linear program for any given order schedule. Relationships are suggested between these strategies and various aspects of rolling horizon production planning. Arguments are given that in such an environment, the static uncertainty strategy is the most straightforward to modify and "roll along" as new demands become known. Good results are found when this procedure is applied to some 300-period stochastic-demand problems using rolling horizons of between 2 and 12 periods in length.
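
To make the "equivalent deterministic problem" idea concrete, the sketch below converts a per-period no-stockout requirement into deterministic cumulative requirements via quantiles of cumulative demand, assuming (for illustration only) independent normal period demands; the resulting time-varying effective demands could then be fed to any deterministic lot-sizing heuristic. This is a generic construction in the spirit of the paper, not its exact procedure.

```python
# Minimal sketch (assumptions: independent normal period demands, service level alpha):
# turn the chance-constrained static-uncertainty problem into an equivalent
# deterministic, time-varying demand series.
from scipy.stats import norm
import numpy as np

mu = np.array([50, 80, 20, 60, 40], dtype=float)     # mean demand per period (hypothetical)
sigma = np.array([10, 15, 5, 12, 8], dtype=float)    # std dev per period (hypothetical)
alpha = 0.95                                          # required probability of no stockout

cum_mu = np.cumsum(mu)
cum_sigma = np.sqrt(np.cumsum(sigma ** 2))           # independence assumption
# Cumulative quantity that must be available by the end of each period t:
cum_req = cum_mu + norm.ppf(alpha) * cum_sigma
# Equivalent deterministic per-period demands (differences of cumulative requirements):
eff_demand = np.diff(np.concatenate(([0.0], cum_req)))
print(np.round(eff_demand, 1))
```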

Journal ArticleDOI
TL;DR: In this article, an approximate model of an inventory control system with two resupply options, one having a shorter lead time, is developed, and a procedure for determining the policy parameters is given.
Abstract: In this paper we develop an approximate model of an inventory control system in which there exist two options for resupply, with one having a shorter lead time. Because the optimal policy appears to be extremely complex, we consider a reasonable extension of the standard (Q, R) policy to allow for two different lot sizes Q1 and Q2, and two different reorder levels, R1 and R2. Expressions for the expected on hand inventory and the expected backorders are developed and a procedure for determining the policy parameters is given. The model is validated by simulation, and calculations are included which compare the average annual cost with and without emergency ordering.

Journal ArticleDOI
TL;DR: In this article, the effect of component commonality on safety stocks in a simple inventory model is investigated, and the results are not all intuitive: while utilizing commonality is beneficial, nothing general can be said about the resulting change in the components' stock levels.
Abstract: This paper extends recent results of Baker et al. [Baker, K. R., M. J. Magazine, H. L. W. Nuttle. 1986. The effect of commonality on safety stocks in a simple inventory model. Management Sci. 32 982-988.] in understanding the impact of component commonality on stocking levels under service level constraints. A model is formulated for an arbitrary number of products with general joint demand distribution. The results obtained are not all intuitive. While utilizing commonality is beneficial, nothing general can be said about the resulting change in the components' stock levels. When the cost structure is of a particular simple form, though, some interesting general patterns do emerge. We also discuss the case of using a service-level measure where rationing of common components might be required, and characterize the implied rationing rule.
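
To make the setting concrete, the sketch below is a hypothetical two-product example (not the paper's model): it compares the component stock needed to meet a no-shortage probability when each product uses its own component with the stock needed when both share a common component, assuming independent normal demands.

```python
# Minimal sketch (assumptions: two products, independent normal demands, one component
# unit per product unit, service measured as P(no shortage) on the relevant stock):
# compare dedicated component stocks with a single pooled common-component stock.
from scipy.stats import norm
import math

mu1, sd1 = 100.0, 30.0     # hypothetical demand for product 1
mu2, sd2 = 120.0, 40.0     # hypothetical demand for product 2
alpha = 0.95
z = norm.ppf(alpha)

dedicated = (mu1 + z * sd1) + (mu2 + z * sd2)
pooled_sd = math.sqrt(sd1**2 + sd2**2)          # independence assumption
common = (mu1 + mu2) + z * pooled_sd

print(f"dedicated components: {dedicated:.1f} units")
print(f"common component    : {common:.1f} units")   # pooled requirement is never larger here
```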

Journal ArticleDOI
TL;DR: Monahan (1984) adapted the quantity discount model of inventory theory to the problem of determining an optimal quantity discount schedule from a vendor's point of view, and opened up an important direction of research.
Abstract: Monahan [Monahan, J. P. 1984. A quantity discount pricing model to increase vendor profits. Management Sci. (June) 720-726.] adapted the quantity discount model of inventory theory to the problem of determining an optimal quantity discount schedule from a vendor's point of view, and opened up an important direction of research. However, his one-item, one-customer, one-vendor model is based on several implicit assumptions that must be judged unreasonable. Monahan must account for the vendor's inventory carrying charges and redefine his variable S2. It is shown that a rational vendor's manufacturing frequency would not be identical to the buyer's ordering frequency if the vendor's manufacturing setup costs are substantially larger than the buyer's ordering costs. A numerical example presented in this note also questions the practical usefulness of Monahan's model even after its theoretical inaccuracies are corrected. Monahan's model may explain discounts that are a fraction of 1% of the price of an item, but it fails to explain commonly observed magnitudes of quantity discounts, such as 10% of the unit price.
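
The note's argument turns on simple lot-sizing cost comparisons. The sketch below is a hypothetical numeric illustration (neither Monahan's model nor the note's corrected version) of why the vendor's setup cost matters: when the vendor's setup cost is much larger than the buyer's ordering cost, producing one buyer lot per setup is far from the vendor's best choice.

```python
# Minimal sketch (hypothetical numbers; a standard vendor-cost approximation, not
# Monahan's model): the vendor produces k buyer lots per production run and ships Q
# per buyer order.  Approximate annual vendor cost: S_v * D/(k*Q) + h_v*(k-1)*Q/2.
D = 10_000          # annual demand (units)
Q = 500             # buyer's order quantity (units)
S_v = 2_000.0       # vendor setup cost per production run (much larger than buyer's)
h_v = 1.0           # vendor holding cost per unit per year

def vendor_cost(k):
    return S_v * D / (k * Q) + h_v * (k - 1) * Q / 2

for k in (1, 2, 4, 8):
    print(f"k={k}: annual vendor cost = {vendor_cost(k):,.0f}")
# k=1 (vendor producing at the buyer's ordering frequency, as the note says Monahan's
# model implies) is far from the vendor's best choice when S_v dominates.
```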

Journal ArticleDOI
TL;DR: In this paper, the problem of setting safety stock when both the demand in a period and the lead time are random variables is considered, and a correct procedure for setting safety stocks based on these two inputs is given for two popular demand models.
Abstract: We consider the problem of setting safety stock when both the demand in a period and the lead time are random variables. There are two cases to consider. In the first case the parameters of the demand and lead time distributions are known; in the second case they are unknown and must be estimated. For the case of known parameters a standard procedure is presented in the literature. In this paper, examples are used to show that this procedure can yield results that are far from the desired result. A correct procedure is presented. When the parameters are unknown, it is assumed that a simple exponential smoothing model is used to generate estimates of demand in each period and that a discrete distribution of the lead time can be developed from historical data. A correct procedure for setting safety stocks that is based on these two inputs is given for two popular demand models. The approach is easily generalized to other models of demand. Safety stock calculation is simplified when certain normality assumptions are valid. Simulation results in the Appendix indicate when these assumptions about normality are reasonable.
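
For reference, the textbook procedure the paper critiques combines demand and lead-time variability into a single lead-time-demand standard deviation. A minimal sketch of that standard calculation (with hypothetical numbers) follows; the paper's point is that this formula and its normality assumption can yield results far from the desired service level, so treat it as the baseline being corrected, not the recommended method.

```python
# Minimal sketch of the *standard* procedure (the one the paper shows can be far off):
# per-period demand with mean mu_D and std sigma_D, lead time (in periods) with mean
# mu_L and std sigma_L, demand and lead time independent.
from scipy.stats import norm
import math

mu_D, sigma_D = 100.0, 25.0    # hypothetical per-period demand
mu_L, sigma_L = 4.0, 1.5       # hypothetical lead time (periods)
service = 0.95

# Standard lead-time-demand variance: E[L]*Var(D) + E[D]^2*Var(L)
sigma_LTD = math.sqrt(mu_L * sigma_D**2 + mu_D**2 * sigma_L**2)
safety_stock = norm.ppf(service) * sigma_LTD
reorder_point = mu_D * mu_L + safety_stock
print(f"sigma_LTD = {sigma_LTD:.1f}, safety stock = {safety_stock:.1f}, R = {reorder_point:.1f}")
```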

Journal ArticleDOI
TL;DR: This paper develops the foundations of a theory of managerial problem solving and analyzes the concept of "problem structure" and proposes a conceptualization satisfying relevant criteria.
Abstract: Management science is concerned with understanding and improving action-oriented managerial thought. While it has addressed this concern through decision making research, there is an alternative conceptualization—the problem solving paradigm—that appears to offer a valuable complementary perspective. This paper develops the foundations of a theory of managerial problem solving. It analyzes the concept of “problem structure” and proposes a conceptualization satisfying relevant criteria. The concept is used in the development of an informal theory of problem structuring. Several structuring methodologies are assessed and an approach pertinent to real world, managerial problems is proposed.

Journal ArticleDOI
TL;DR: In this paper, the authors considered the sensitivity of mean system time of a customer to a parameter of the arrival or service distribution and showed analytically that the steady state value of the perturbation analysis estimate of this sensitivity is unbiased.
Abstract: The technique of perturbation analysis has recently been introduced as an efficient way to compute parameter sensitivities for discrete event systems. Thus far, the statistical properties of perturbation analysis have been validated mainly through experiments. This paper considers, for an M/G/1 queueing system, the sensitivity of mean system time of a customer to a parameter of the arrival or service distribution. It shows analytically that (i) the steady state value of the perturbation analysis estimate of this sensitivity is unbiased, and (ii) a perturbation analysis algorithm implemented on a single sample path of the system gives asymptotically unbiased and strongly consistent estimates of this sensitivity. (No previous knowledge of perturbation analysis is assumed, so the paper also serves to introduce this technique to the unfamiliar reader.) Numerical extensions to GI/G/1 queues, and applications to optimization problems, are also illustrated.
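
To give a flavor of what a perturbation analysis estimate looks like on a single sample path, the sketch below simulates an M/M/1 queue (a special case of M/G/1) and accumulates an IPA estimate of the derivative of mean system time with respect to the mean service time. This is a generic, textbook-style IPA recursion under the stated assumptions, not the paper's derivation.

```python
# Minimal sketch (assumptions: M/M/1 queue, FCFS, service time S_n = theta * X_n with
# X_n ~ Exp(1)): single-sample-path IPA estimate of d(mean system time)/d(theta),
# compared with the analytic M/M/1 value 1/(1 - lambda*theta)^2.
import numpy as np

rng = np.random.default_rng(1)
lam, theta, N = 0.5, 1.0, 200_000          # arrival rate, mean service time, #customers

A = rng.exponential(1.0 / lam, N)          # interarrival times
X = rng.exponential(1.0, N)                # "raw" service variates; S = theta * X

T_prev, dT_prev = 0.0, 0.0                 # previous customer's system time and its derivative
ipa_sum = 0.0
for n in range(N):
    wait = max(T_prev - A[n], 0.0)         # Lindley recursion for the waiting time
    T = wait + theta * X[n]
    dT = X[n] + (dT_prev if T_prev > A[n] else 0.0)   # IPA: derivative resets at idle periods
    ipa_sum += dT
    T_prev, dT_prev = T, dT

print("IPA estimate  :", ipa_sum / N)
print("analytic value:", 1.0 / (1.0 - lam * theta) ** 2)
```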

Journal ArticleDOI
TL;DR: In this paper, a goal programming/constrained regression model was used to compare the main findings of the econometric studies in every one of the 20 years covered, and the results showed that the results of these studies did not support the contention that Bell was a natural monopoly.
Abstract: The recently implemented court decision to break up Bell (=American Telephone & Telegraph Co.) to accord with U.S. anti-trust laws represents a highly significant policy decision which is proving to be influential in other countries as well as the U.S. The telecommunication industry is of such size and importance that even relatively small economies that might be lost with Bell's breakup as a “natural monopoly” could involve substantial welfare losses to consumers and producers. Studies commissioned by the U.S. Justice Department that approached this topic by econometric methods reported that the evidence failed to support the contention that Bell was a natural monopoly. Here a goal programming/constrained regression, as developed in the Management Science literature, uses the same functional form and the same data but nevertheless reverses the main findings of the econometric studies in every one of the 20 years covered. This kind of difference in results obtained by two different methods of analysis poi...
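
The abstract gives no model details; purely as a generic illustration of what a goal programming/constrained regression looks like, the sketch below fits a cost function by minimizing the sum of absolute deviations subject to sign constraints on the coefficients, posed as a linear program. The data and functional form are hypothetical, not the study's Bell System data or specification.

```python
# Minimal sketch (hypothetical data; generic technique, not the study's model):
# "constrained regression" as a linear program -- minimize the sum of absolute
# deviations subject to nonnegativity (sign) constraints on the coefficients.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p = 60, 3
X = np.column_stack([np.ones(n), rng.uniform(1, 10, size=(n, p - 1))])
beta_true = np.array([2.0, 0.7, 1.3])                    # nonnegative by construction
y = X @ beta_true + rng.normal(0, 0.5, n)

# Decision variables: beta (p), e_plus (n), e_minus (n); residual = e_plus - e_minus.
c = np.concatenate([np.zeros(p), np.ones(n), np.ones(n)])
A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
b_eq = y
bounds = [(0, None)] * (p + 2 * n)                       # beta >= 0, deviations >= 0

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("estimated coefficients:", np.round(res.x[:p], 3))
```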

Journal ArticleDOI
TL;DR: A new algorithm for optimally balancing assembly lines is formulated and tested, which obtains proven optimal solutions for ten 1000 task lines, which each possess the computationally favorable conditions of an average of at least 6 tasks per work station and a small number of between-task precedence requirements.
Abstract: A new algorithm for optimally balancing assembly lines is formulated and tested. Named "FABLE," it obtains proven optimal solutions for ten 1000 task lines, which each possess the computationally favorable conditions of an average of at least 6 tasks per work station and a small number of between-task precedence requirements, in less than 20 seconds of IBM 3033U CPU time for each problem. FABLE also performs very favorably on a benchmark group of 64 test problems drawn from the literature, which are of up to 111 tasks each. FABLE finds and proves an optimal solution to the 64 problems in a total of 3.16 seconds of IBM 3090 CPU time. FABLE is a 'laser' type, depth-first, branch-and-bound algorithm, with logic designed for very fast achievement of feasibility, ensuring a feasible solution to any line of 1000 or even more tasks. It utilizes new and existing dominance rules and bound arguments. A total of 549 problems of various characteristics are solved to determine conditions under which FABLE performs most and least favorably. Performance is sensitive to average number of tasks per work station, number of between-task precedence requirements measured by 'order strength', and the total number of tasks per problem. A heuristic variant of FABLE is also described.
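
FABLE itself is an elaborate depth-first branch-and-bound method; the sketch below is only a simple greedy station-assignment heuristic (fill each station with the longest eligible task whose predecessors are done and which fits in the remaining station time), on a small hypothetical task set, to illustrate the feasibility logic such algorithms build on.

```python
# Minimal sketch (not FABLE): a greedy assembly-line-balancing heuristic.
# Tasks have durations and precedence constraints; fill each work station up to
# the cycle time, opening a new station when nothing else fits.
def balance(durations, precedences, cycle_time):
    assert all(d <= cycle_time for d in durations.values())
    preds = {t: set() for t in durations}
    for a, b in precedences:              # a must precede b
        preds[b].add(a)
    done, stations = set(), []
    while len(done) < len(durations):
        remaining, station = cycle_time, []
        while True:
            eligible = [t for t in durations
                        if t not in done and preds[t] <= done
                        and durations[t] <= remaining]
            if not eligible:
                break
            t = max(eligible, key=lambda t: durations[t])   # longest eligible task first
            station.append(t)
            done.add(t)
            remaining -= durations[t]
        stations.append(station)
    return stations

# Hypothetical 7-task instance, cycle time 10
durations = {"a": 5, "b": 3, "c": 4, "d": 6, "e": 2, "f": 4, "g": 5}
precedences = [("a", "c"), ("b", "c"), ("c", "e"), ("d", "f"), ("e", "g"), ("f", "g")]
print(balance(durations, precedences, 10))   # e.g. [['d','f'], ['a','b'], ['c','e'], ['g']]
```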

Journal ArticleDOI
TL;DR: This paper presents a model of an (s, S) inventory system with two priority classes of customers; an approximate, renewal-based model is derived and used to develop a greedy heuristic that minimizes expected costs subject to a fill-rate service constraint.
Abstract: This paper presents a model of an (s, S) inventory system in which there are two priority classes of customers. The model treats excess demands as lost sales and can accommodate an arbitrary deterministic lead time. After considering the associated Markov-chain model, an approximate, renewal-based model is derived. This approximation is used to develop a greedy heuristic which minimizes expected costs subject to a fill-rate service constraint. The paper concludes with the results of an extensive numerical test of both the accuracy of the approximation and the performance of the heuristic with respect to the true optimal solution. Results indicate good performance which deteriorates as the fill rate requirement and lead time increase.

Journal ArticleDOI
TL;DR: In this paper, the authors focus on a single order cycle of a warehouse, serving N retailers where the only shipments allowed during the cycle are from the warehouse to the retailers, and develop both the exact cost model and a computationally tractable approximate cost model for the case of identical retailers.
Abstract: In this paper, we focus on a single order cycle of a warehouse, serving N retailers where the only shipments allowed during the cycle are from the warehouse to the retailers. For a simple ship-up-to-S allocation policy, we develop both the exact cost model and a computationally tractable approximate cost model for the case of identical retailers, and demonstrate empirically the benefits of centralizing at least a portion of the total system stock.

Journal ArticleDOI
TL;DR: This article examined how weights in multi-attribute utility measurement change when objectives are split into more detailed levels and found that the more detailed parts of the value tree were weighted significantly higher than the less detailed ones.
Abstract: This study examined how weights in multiattribute utility measurement change when objectives are split into more detailed levels. Subjects were asked to weight attributes in value trees containing three objectives which were specified by either three, four, five, or six attributes. The robust finding was that the more detailed parts of the value tree were weighted significantly higher than the less detailed ones. This overweighting bias was found for several weighting techniques, but the techniques that used holistic judgments to derive weights were affected somewhat less than techniques that used decomposed attribute weights. This bias is interpreted in terms of the increased salience and availability of attributes that are spelled out in more detail.

Journal ArticleDOI
TL;DR: Infinitesimal perturbation analysis (IPA) as discussed by the authors is a method for computing a sample path derivative with respect to an input parameter in a discrete event simulation, which is based on the fact that for certain parameters and any realization of a simulation, the change in parameter can be made small enough so that only the times of events get shifted, but their order does not change.
Abstract: Infinitesimal Perturbation Analysis (IPA) is a method for computing a sample path derivative with respect to an input parameter in a discrete event simulation. The IPA algorithm is based on the fact that for certain parameters and any realization of a simulation, the change in parameter can be made small enough so that only the times of events get shifted, but their order does not change. This paper considers the convergence properties of the IPA sample path derivatives. In particular, the question of when an IPA estimate converges to the derivative of a steady state performance measure is studied. Necessary and sufficient conditions for this convergence are derived for a class of regenerative processes. Although these conditions are not guaranteed to be satisfied in general, they are satisfied for the mean stationary response time in the M/G/1 queue. A necessary condition for multiple IPA estimates to simultaneously converge to the derivatives of steady state throughputs in a queueing network is determined. The implications of this necessary condition are that, except in special cases, the original IPA algorithm cannot be used to consistently estimate steady state throughput derivatives in queueing networks with multiple types of customers, state-dependent routing or blocking. Numerical studies on IPA convergence properties are also presented.

Journal ArticleDOI
TL;DR: A family of heuristics to solve combinatorial problems such as routing and partitioning that exploit geometry but ignore specific distance measures are described, which seem well-suited to operational problems where time or computing resources are limited.
Abstract: We describe a family of heuristics to solve combinatorial problems such as routing and partitioning. These heuristics exploit geometry but ignore specific distance measures. Consequently they are simple and fast, but nonetheless fairly accurate, and so seem well-suited to operational problems where time or computing resources are limited. We survey promising new application areas, and show how procedures may be customized to reflect the structure of particular applications.
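
As a toy example of a heuristic that exploits geometry while ignoring the specific distance measure, the sketch below builds a travelling-salesman tour by sorting points by angle around their centroid. This is a hypothetical "sweep"-style rule used purely for illustration, not one of the paper's specific procedures.

```python
# Minimal sketch (not a procedure from the paper): a purely geometric routing
# heuristic -- visit points in order of their angle around the centroid.  It never
# compares pairwise distances, yet produces a reasonable tour quickly.
import math
import random

random.seed(0)
points = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]

cx = sum(x for x, _ in points) / len(points)
cy = sum(y for _, y in points) / len(points)
tour = sorted(points, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))

length = sum(math.dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))
print(f"sweep tour length: {length:.1f}")
```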

Journal ArticleDOI
TL;DR: In this article, a new lower bound on the cost of the optimal policy is proposed so that the performance of heuristics may be measured; motivated by the lower bound, a simple periodic policy is proposed and shown to be an improvement over 'can-order' policies for many data sets.
Abstract: The coordinated multi-item inventory problem refers to the problem of managing inventories where there is a joint fixed cost for replenishing plus an item-by-item fixed cost for each item included in the replenishment order. Given that the optimal solution is likely to be too complex, attention has focused on fixed heuristics, and in particular 'can-order' or (s, c, S) policies. This paper makes two contributions. First, a new lower bound on the cost of the optimal policy is proposed in order that the performance of heuristics may be measured. Second, motivated by the lower bound, a simple periodic policy is proposed and shown to be an improvement over 'can-order' policies for many data sets.

Journal ArticleDOI
TL;DR: In this paper, the authors deal with the determination of optimal advertising strategies for new product diffusion models, considering the introduction of a new consumer durable in a monopolistic market where the evolution of sales is modelled by a flexible diffusion model.
Abstract: This paper deals with the determination of optimal advertising strategies for new product diffusion models. We consider the introduction of a new consumer durable in a monopolistic market and the evolution of sales is modelled by a flexible diffusion model. Repeat sales and possible entry of rivals are disregarded but we allow for discounting of future revenue streams and cost learning curve. Using standard methods of optimal control theory we characterize qualitatively the structure of an optimal advertising strategy for different versions of the diffusion model.
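
For concreteness, a generic formulation of this kind of problem can be written as an optimal control problem in which advertising shifts the diffusion rate, future revenues are discounted, and unit cost declines with cumulative sales. The symbols and functional forms below are assumptions chosen for illustration, a sketch in the spirit of the paper rather than its exact model.

```latex
% Generic sketch (symbols are assumptions, not the paper's exact model):
% N(t) = cumulative adopters, m = market potential, a(t) = advertising rate,
% p = price, c(N) = unit cost following a learning curve, r = discount rate.
\max_{a(\cdot)\,\ge\,0}\;\int_0^T e^{-rt}\Bigl[\bigl(p - c(N(t))\bigr)\,\dot N(t)\;-\;a(t)\Bigr]\,dt
\qquad\text{subject to}\qquad
\dot N(t) \;=\; \Bigl(\alpha\,h\bigl(a(t)\bigr) + \beta\,\tfrac{N(t)}{m}\Bigr)\bigl(m - N(t)\bigr),
\qquad N(0)=N_0 .
```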

Journal ArticleDOI
David E. Bell
TL;DR: In this article, the authors identify the small class of utility functions for which the one-switch property holds, and show that the results apply equally well to discounting functions for cash flows: one-switch discount functions permit at most one change in preference between cash flows as all payoffs are deferred in time.
Abstract: Consider the relative attractiveness to a decision maker of two financial gambles as the wealth of that individual varies. It may seem reasonable that either one alternative should be preferred for all wealth levels or that there exists a unique critical wealth level at which the decision maker switches from preferring one alternative to the other. Decreasing risk aversion is not sufficient for this property to hold: we identify the small class of utility functions for which it does. We show how the property leads naturally to a measure of risk. The results of this paper apply equally well to discounting functions for cash flows: one-switch discount functions permit at most one change in preference between cash flows as all payoffs are deferred in time.
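
To illustrate the one-switch idea numerically, the sketch below uses a hypothetical pair of alternatives and an assumed linear-plus-exponential utility (chosen here only as an example of a decreasingly risk-averse function, not as the paper's characterization). It scans wealth levels and reports where the preference between a sure thing and a risky gamble changes; for this example the sign of the expected-utility difference changes exactly once.

```python
# Minimal sketch (assumptions: u(w) = w - b*exp(-c*w) as an illustrative decreasingly
# risk-averse utility; two hypothetical alternatives).  Scan wealth levels and count
# how many times the preference between the alternatives switches.
import math

b, c = 50.0, 0.2
u = lambda w: w - b * math.exp(-c * w)

def expected_utility(wealth, gamble):
    """gamble: list of (probability, payoff) pairs."""
    return sum(p * u(wealth + x) for p, x in gamble)

sure_thing = [(1.0, 0.0)]                    # keep current wealth
risky      = [(0.5, 10.0), (0.5, -5.0)]      # positive expected value, but risky

prev_sign, switches = None, 0
for w in [i * 0.5 for i in range(0, 201)]:   # wealth from 0 to 100
    diff = expected_utility(w, risky) - expected_utility(w, sure_thing)
    sign = diff > 0
    if prev_sign is not None and sign != prev_sign:
        switches += 1
        print(f"preference switches near wealth {w:.1f}")
    prev_sign = sign
print("number of switches:", switches)       # 1 for this example
```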

Journal ArticleDOI
TL;DR: A new algorithm is presented for the optimal solution of the 0-1 Knapsack problem, which is particularly effective for large-size problems, and incorporates a new method of computation of upper bounds and efficient implementations of reduction procedures.
Abstract: We present a new algorithm for the optimal solution of the 0-1 Knapsack problem, which is particularly effective for large-size problems. The algorithm is based on determination of an appropriate small subset of items and the solution of the corresponding "core problem": from this we derive a heuristic solution for the original problem which, with high probability, can be proved to be optimal. The algorithm incorporates a new method of computation of upper bounds and efficient implementations of reduction procedures. The corresponding Fortran code is available. We report computational experiments on small-size and large-size random problems, comparing the proposed code with all those available in the literature.
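
The paper's algorithm (a core problem plus new bounds and reduction procedures) is specialized for large instances. As a baseline point of comparison only, the sketch below is the standard dynamic-programming solution of a small 0-1 knapsack instance with hypothetical data; it is exact but practical only for modest capacities.

```python
# Minimal sketch (standard textbook DP, not the paper's algorithm): exact solution
# of a small 0-1 knapsack instance.  dp[c] = best value achievable with capacity c.
def knapsack(values, weights, capacity):
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):   # reverse order so each item is used at most once
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

values  = [60, 100, 120, 80, 30]
weights = [10,  20,  30, 25,  5]
print(knapsack(values, weights, capacity=50))   # 220 for this toy instance
```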

Journal ArticleDOI
TL;DR: In this article, a special case of three-mode factor analysis is used to portray the systematic structure underlying asymmetric cross elasticities for a broad class of market-share attraction models.
Abstract: A special case of three-mode factor analysis is used to portray the systematic structure underlying asymmetric cross elasticities for a broad class of market-share attraction models. Analysis of the variation over retail outlets and weeks reveals competitive patterns corresponding to sales for the major brands in the market as well as patterns reflecting shelf-price competition. Analysis of the brand domain results in a joint space. One set of brand positions portrays how brands exert influence over the competition. The other set of points portrays how brands are influenced by others. The interset distances (angles) provide direct measures of competitive pressures. Maps are formed as spatial representations of each of the competitive patterns discovered.