
Showing papers in "Management Science in 1990"


Journal ArticleDOI
TL;DR: In this paper, the authors examine the interaction of managerial tasks with information technology and its effect on the adoption and infusion of that technology; using a random sample of manufacturing firms across the United States, they find that this interaction does indeed affect the adoption of MRP, though it does not seem to affect MRP infusion.
Abstract: Based on the innovation and technological diffusion literatures, promising research questions concerning the implementation of a production and inventory control information system (material requirements planning: MRP) are identified and empirically examined. These questions focus on the interaction of managerial tasks with the information technology and the resulting effect on the adoption and infusion of that technology. Using a random sample of manufacturing firms across the United States, we find that this interaction does indeed affect the adoption of MRP, though it does not seem to affect MRP infusion. These results support the notion that though rational decision models may be useful in explaining information technology adoption, political and learning models may be more useful when examining infusion.

2,884 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider how prior outcomes are combined with the potential payoffs offered by current choices, propose an editing rule to describe how decision makers frame such problems, and present data from real-money experiments supporting a "house money effect" (increased risk seeking in the presence of a prior gain) and "break-even effects" (in the presence of prior losses, outcomes that offer a chance to break even are especially attractive).
Abstract: How is risk-taking affected by prior gains and losses? While normative theory implores decision makers to only consider incremental outcomes, real decision makers are influenced by prior outcomes. We first consider how prior outcomes are combined with the potential payoffs offered by current choices. We propose an editing rule to describe how decision makers frame such problems. We also present data from real money experiments supporting a “house money effect” (increased risk seeking in the presence of a prior gain) and “break-even effects” (in the presence of prior losses, outcomes which offer a chance to break even are especially attractive).

1,720 citations


Journal ArticleDOI
TL;DR: A meta-analysis of results from 320 published studies relating environmental, strategic and organizational factors to financial performance, as discussed by the authors, finds that concentration and growth have a relatively consistent positive impact on performance.
Abstract: A meta-analysis of results from 320 published studies relates environmental, strategic and organizational factors to financial performance. Some factors (e.g., concentration and growth) have been studied widely and have a relatively consistent positive impact on performance. Other widely studied factors (e.g., size) have few consistent effects. Many factors, particularly organizational variables, are understudied. We suggest implications for research and management practice.

1,190 citations


Journal ArticleDOI
TL;DR: This article examines the persistence of learning within organizations and the transfer of learning across organizations using data collected from multiple organizations, and finds that knowledge acquired through production depreciates rapidly and that the conventional measure of learning, cumulative output, significantly overstates the persistence of learning.
Abstract: The persistence of learning within organizations and the transfer of learning across organizations are examined on data collected from multiple organizations. Results indicate that knowledge acquired through production depreciates rapidly. The conventional measure of learning, cumulative output, significantly overstates the persistence of learning. There is some evidence that learning transfers across organizations: organizations beginning production later are more productive than those with early start dates. Once organizations begin production, however, they do not appear to benefit from learning in other organizations. The implications of the results for a theory of organizational learning are discussed. Managerial implications are described.

1,055 citations
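The contrast between cumulative output and a depreciating knowledge stock is easy to see in code. Below is a minimal sketch, assuming a simple recursive form K_t = λ·K_{t−1} + q_t in which λ is a retention rate (λ = 1 recovers cumulative output); the functional form and all numbers are illustrative, not the paper's estimation.

```python
# Illustrative only (not the paper's estimation): compare cumulative output
# with a depreciating knowledge stock K_t = lam * K_{t-1} + q_t, where lam
# is a hypothetical retention rate; lam = 1 recovers cumulative output.

def knowledge_stock(output, lam):
    """Recursively depreciated production experience."""
    k, stocks = 0.0, []
    for q in output:
        k = lam * k + q
        stocks.append(k)
    return stocks

quarterly_output = [100, 120, 90, 130, 110, 140]    # hypothetical data
print(knowledge_stock(quarterly_output, lam=1.0))   # cumulative output
print(knowledge_stock(quarterly_output, lam=0.75))  # rapid depreciation
```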


Journal ArticleDOI
TL;DR: The analytic hierarchy process (AHP) is flawed as a procedure for ranking alternatives in that the rankings produced by this procedure are arbitrary, as discussed by the authors; the key to correcting this flaw is the synthesis of the AHP with the concepts of multiattribute utility theory.
Abstract: The analytic hierarchy process (AHP) is flawed as a procedure for ranking alternatives in that the rankings produced by this procedure are arbitrary. This paper provides a brief review of several areas of operational difficulty with the AHP, and then focuses on the arbitrary rankings that occur when the principle of hierarchic composition is assumed. This principle requires that the weights on the higher levels of a hierarchy can be determined independently of the weights on the lower levels. Virtually all of the published examples of the use of the AHP to evaluate alternatives relative to a set of criteria have assumed this principle. The key to correcting this flaw is the synthesis of the AHP with the concepts of multiattribute utility theory.

1,030 citations


Journal ArticleDOI
TL;DR: In this article, the effects of anonymity and evaluative tone on computer-mediated groups using a group decision support system to perform an idea-generation task were evaluated in a laboratory experiment.
Abstract: A laboratory experiment was used to evaluate the effects of anonymity and evaluative tone on computer-mediated groups using a group decision support system to perform an idea-generation task. Evaluative tone was manipulated through a confederate group member who entered supportive or critical comments into the automated brainstorming system. Groups working anonymously and with a critical confederate produced the greatest number of original solutions and overall comments, yet average solution quality per item and average solution rarity were not different across conditions. Identified groups working with a supportive confederate were the most satisfied and had the highest levels of perceived effectiveness, but produced the fewest original solutions and overall comments.

902 citations


Journal ArticleDOI
TL;DR: In this paper, the authors report on the culmination of a four-year study of high-technology product innovation and identify the critical organizational subunits, development activities and communication channels that influence product outcome, as well as external factors such as characteristics of the product and the competitive environment.
Abstract: This paper reports on the culmination of a four-year study of high-technology product innovation. During the course of this research, we examined over 330 new products in the electronics industry in order to better understand the factors that differentiated successful from unsuccessful product development efforts. This paper presents the conclusions from the final phase of the research. In this study we empirically test a model of product development that incorporates our findings from the earlier exploratory survey and case study phases of our research. The model identifies the critical organizational subunits, development activities and communication channels that influence product outcome, as well as external factors such as characteristics of the product and the competitive environment. Our findings suggest the following key factors affect product outcome: (1) the quality of the R&D organization, (2) the technical performance of the product, (3) the product's value to the customer, (4) the synergy of the new product with the firm's existing competences, and (5) management support during the product development and introduction processes. Also important but less significant were (6) the competence of the marketing and manufacturing organizations and market factors, such as (7) the competitiveness and (8) the size and rate of growth of the target market.

863 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide a review of the use of the user-satisfaction construct as a measure of information systems effectiveness and discuss attitude structures and functions in information systems research.
Abstract: For nearly two decades, the user-satisfaction construct has occupied a central role in behavioral research in Information Systems (IS). In industry, the construct has often been used as a surrogate for IS effectiveness. Given its widespread use by both academics and practitioners, it is surprising that no comprehensive theoretical assessment of this construct has been performed. This paper provides such a review. It begins by examining conceptual and theoretical limitations of the construct's use as a measure of IS effectiveness. Attention is then focused on the evolution of the construct in the literature and the theoretical problems associated with its broader use. The fundamental similarity between user satisfaction and the social and cognitive psychologists' notion of an attitude is suggested. The next sections focus on a discussion of attitude structures and function. First, alternative theoretical views on attitude structure are presented. While one of these structures, the family of expectancy-value models, is reflected in current research on user satisfaction, the second, the family of cognitive approaches, is not. The two attitude structures are considered from the perspective of possible refinements to future work in IS. Next, an examination is made of the ways in which these structures have been integrated in terms of understanding the relationship of users' affective responses to other responses (i.e., behavior or cognition). This leads to a discussion of the function attitudes might serve for the user other than the evaluation of an information system or IS staff. Finally, the question of how behavior influences attitude is considered. The paper concludes with suggestions for future work.

766 citations


Journal ArticleDOI
TL;DR: The Analytic Hierarchy Process (AHP) as mentioned in this paper is a theory of measurement that is applied in decision making to describe the general decision operation by decomposing a complex problem into a multi-level hierarchic structure of objectives, criteria, subcriteria and alternatives.
Abstract: It is a fact that people make decisions and have been making decisions for a very long time. Contrary to what some of us who are interested in decision-making may like to believe, most people do not take seriously the existence of theories which purport to set their thinking and feeling right. They claim to know their own value system and what they want. They may wonder how anyone else can know well enough to tell them how best to organize their thinking in order to make better choices. Yet, research has shown that complex decisions are beyond the capacity of the brain to synthesize intuitively and efficiently. Since decision making is a natural characteristic of people, how do we describe what they do so that an ordinary mortal can understand what we are saying? We do not wish to legislate the method with which people should make decisions, but only to describe it even when it is prescribed by some method. In the process, we may learn things that can help people make better decisions. How? The Analytic Hierarchy Process (AHP) (Forman et al., Harker 1986, Harker and Vargas 1987, Saaty 1986, 1988a, b, Saaty and Vargas 1987, Xu 1988, Golden et al. 1989, Saaty and Alexander 1989) is a theory of measurement. When applied in decision making it assists one to describe the general decision operation by decomposing a complex problem into a multi-level hierarchic structure of objectives, criteria, subcriteria and alternatives. The AHP provides a fundamental scale of relative magnitudes expressed in dominance units to represent judgments in the form of paired comparisons. A ratio scale of relative magnitudes expressed in priority units is then derived from each set of comparisons. An overall ratio scale of priorities is then synthesized to obtain a ranking of the alternatives. From its axioms to its procedures, the AHP has turned out to be historically and theoretically a different and independent theory of decision making from utility theory. Much as a dialogue evolved in mathematics around the consistency of different geometries and around absolute and relative space and time in physics, both to dispel absolute notions, those who believe that only utility theory can tell us the absolute truth about man's decision-making might take a close look at the AHP. It has found varied and serious applications. It also has a particular way of generating ratio scales and dealing with inconsistency in judgment that have contributed to its effectiveness in resource allocation and in the setting of priorities by a group of decision makers. Utility theory is a normative process. The AHP as a descriptive theory encompasses procedures leading to outcomes as would be ranked by a normative theory. But it must go beyond to deal with outcomes not accounted for by the demanding assumptions of a normative theory. We must briefly describe the AHP to enable the reader to see that a practicable theory based on ratio scales need not dilute itself to satisfy expectations of people who derive their understanding from a theory based on interval scales. This is particularly true if the rival theory, in aspiring for generality, also makes unrealistic assumptions, for example about the transitivity and consistency of preferences and the difficult use of lotteries.

721 citations
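As a concrete illustration of the mechanics the abstract describes, here is a minimal sketch of deriving ratio-scale priorities from a reciprocal matrix of paired comparisons via the standard principal-eigenvector computation; the judgment values are hypothetical, and this is the textbook procedure rather than a reproduction of any application in the paper.

```python
# Minimal AHP-style sketch: derive priority weights from a reciprocal
# pairwise comparison matrix via the principal eigenvector. The matrix
# entries are hypothetical judgments on a 1-9 dominance scale.
import numpy as np

A = np.array([[1,   3,   5  ],
              [1/3, 1,   2  ],
              [1/5, 1/2, 1  ]])

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)                      # principal eigenvalue
w = np.abs(eigvecs[:, i].real)
w /= w.sum()                                     # normalized priority weights
ci = (eigvals.real[i] - len(A)) / (len(A) - 1)   # consistency index
print("priorities:", w.round(3), "CI:", round(ci, 4))
```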


Journal ArticleDOI
TL;DR: In this article, the authors investigate the market benefits and cost disadvantages of broader product lines on a large sample of over 1,400 business units and find significant market share benefits and increases in firms' profitability with broad product lines; moreover, widely held beliefs of increases in production costs are not empirically supported.
Abstract: Strategic product line breadth decisions evoke differential responses from the manufacturing and the marketing areas: manufacturing prefers keeping process disruptions to a minimum and, as a result, discourages product proliferation; however, marketing, in its attempt to match products to heterogeneous consumer needs and gain market share, emphasizes a broader product line. We systematically investigate the market benefits and cost disadvantages of broader product lines on a large sample of over 1,400 business units. Our results indicate significant market share benefits and increases in firms' profitability with broader product lines; moreover, widely held beliefs of increases in production costs are not empirically supported. American manufacturing firms may indeed be flexible enough to accommodate product variety without significant detrimental effects on costs.

583 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examined the desirability and implications of new venture financing within a principal-agent framework that captures the essence of the relationship between entrepreneurs and venture capitalists.
Abstract: A number of issues that relate to the desirability and implications of new venture financing are examined within a principal-agent framework that captures the essence of the relationship between entrepreneurs and venture capitalists. The model suggests: (1) As long as the skill levels of entrepreneurs are common knowledge, all will choose to involve venture capital investors, since the risk sharing provided by outside participation dominates the agency relationship that is created. (2) The less able entrepreneurs will choose to involve venture capitalists, whereas the more profitable ventures will be developed without external participation because of the adverse selection problem associated with asymmetric information. (3) If a costly signal is available that conveys the entrepreneur's ability, some entrepreneurs will invest in such a signal and then sell to investors; these entrepreneurs, however, need not be the more able ones. The implications for new venture financing of these and other findings are discussed and illustrated by example.

Journal ArticleDOI
TL;DR: More than 500 top-level business executives were studied to ascertain the validity of common stereotypes of who takes risks and who avoids risks as discussed by the authors, and the results were surprisingly clearcut.
Abstract: More than 500 top-level business executives were studied to ascertain the validity of common stereotypes of who takes risks and who avoids risks. We began with 13 risk measures based on theoretical grounds, naturally occurring situations, and attitudes. These measures were formed into seven consolidated measures using factor analysis. Data were gathered on numerous socio-economic variables including ones relating to personal, financial, and professional characteristics. When these characteristics were subjected to factor analysis, four main factors emerged. Linear discriminant analysis was used to address the question of whether risk takers can be differentiated from risk averters. The results were surprisingly clearcut. The most successful executives were the biggest risk takers; the most mature executives were the most risk averse.

Journal ArticleDOI
TL;DR: In this article, the authors present a model and an analysis of the cost-flexibility tradeoffs involved in investing in product-flexible manufacturing capacity, which provides a firm with the ability to...
Abstract: This paper presents a model and an analysis of the cost-flexibility tradeoffs involved in investing in product-flexible manufacturing capacity. Flexible capacity provides a firm with the ability to...

Journal ArticleDOI
TL;DR: In this article, a subgame perfect Nash equilibrium pricing policy is characterized and shown to involve intertemporal price discrimination, and the authors compare this policy to the optimal policy for a monopolist facing myopic consumers.
Abstract: This paper considers the intertemporal pricing problem for a monopolist marketing a new product. The key feature differentiating this paper from the extant management science literature on intertemporal pricing is the assumption that consumers are intertemporal utility maximizers. A subgame perfect Nash equilibrium pricing policy is characterized and shown to involve intertemporal price discrimination. We compare this policy to the optimal policy for a monopolist facing myopic consumers and find that for any given state, prices are always lower with rational consumers than with myopic consumers. For plausible parameter values the assumption of consumer rationality can be shown to lead to large differences in optimal prices. Moreover, if a monopolist facing rational consumers implements the optimal myopic consumer pricing policy, profits can be significantly less than if the monopolist follows the equilibrium pricing policy for rational consumers.

Journal ArticleDOI
TL;DR: In this paper, the authors present a set of propositions about the timing of new product entry and empirically test the relationship between the market-entry time and the likelihood of success for new industrial products.
Abstract: In a dynamic, competitive environment, the decision to enter the market should be timed to balance the risks of premature entry against the missed opportunity of late entry. Previous research has mainly focused on the strategic aspects of the entry-time decision. In this paper we review the literature and develop a set of propositions about the timing of new product entry. Then we empirically test the relationship between the market-entry time and the likelihood of success for new industrial products.

Journal ArticleDOI
TL;DR: A model of the innovation diffusion process is developed using a micromodeling approach that explicitly considers the determinants of adoption at the individual level in a decision analytic framework, and incorporates heterogeneity in the population with respect to initial perceptions, preference characteristics, and responsiveness to information.
Abstract: A model of the innovation diffusion process is developed using a micromodeling approach that explicitly considers the determinants of adoption at the individual level in a decision analytic framework, and incorporates heterogeneity in the population with respect to initial perceptions, preference characteristics, and responsiveness to information. The micromodeling approach provides a behavioral basis for explaining adoption at the disaggregate level and the consequent pattern of diffusion at the aggregate level. The analytical implications of the model are compared and contrasted with the traditional, aggregate-level, diffusion models. An advantage of our approach is its micro-theory driven flexibility in accommodating various patterns of diffusion. Examples are provided of conditions under which the model yields diffusion patterns identical to those of some well-known aggregate models. A pilot study is reported, outlining procedures for data collection and estimation of the individual-level parameters, and providing a preliminary test of the predictive performance of the model. Measurement of the individual parameters prior to product launch enables potential applications of the model for segmentation of the target population in terms of their expected adoption times.
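To make the micromodeling idea concrete, the following is an illustrative simulation, not the paper's model: individuals hold heterogeneous initial perceptions, adoption thresholds, and responsiveness to information; word-of-mouth from prior adopters updates perceptions, and individual adoptions aggregate into an S-shaped diffusion curve. All distributions and parameter values are hypothetical.

```python
# Illustrative micromodel of diffusion (not the paper's specification):
# an individual adopts once perceived value exceeds a personal threshold;
# word-of-mouth from adopters plus a weak external signal update
# perceptions. All distributions and parameters are hypothetical.
import random

random.seed(1)
N, T, true_value = 1000, 40, 1.0
perception = [random.gauss(0.3, 0.3) for _ in range(N)]     # initial perceptions
threshold  = [random.gauss(0.8, 0.2) for _ in range(N)]     # adoption cutoffs
response   = [random.uniform(0.05, 0.3) for _ in range(N)]  # responsiveness
adopted = [False] * N

curve = []
for t in range(T):
    share = sum(adopted) / N
    for i in range(N):
        if not adopted[i]:
            # word-of-mouth (proportional to adopter share) plus a small
            # external signal pull perception toward the true value
            perception[i] += (response[i] * share + 0.01) * (true_value - perception[i])
            adopted[i] = perception[i] > threshold[i]
    curve.append(sum(adopted))
print(curve)  # cumulative adopters per period: an S-shaped pattern
```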

Journal ArticleDOI
TL;DR: The results indicate that a combination of model and manager always outperforms either of these decision inputs in isolation, an average R² increase of 0.09 above the best single decision input in cross-validated model analyses.
Abstract: We focus on ways of combining simple database models with managerial intuition. We present a model and method for isolating managerial intuition. For five different business forecasting situations, our results indicate that a combination of model and manager always outperforms either of these decision inputs in isolation, an average R² increase of 0.09 (16%) above the best single decision input in cross-validated model analyses. We assess the validity of an equal weighting heuristic, 50% model + 50% manager, and then discuss why our results might differ from previous research on expert judgment.
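The equal-weighting heuristic the abstract assesses is simple to state in code. A minimal sketch, with hypothetical forecast numbers:

```python
# Sketch of the 50% model + 50% manager equal-weighting combination
# discussed above; all forecast numbers are hypothetical.
model_forecast   = [120.0, 98.0, 143.0]
manager_forecast = [110.0, 105.0, 150.0]

combined = [0.5 * m + 0.5 * g for m, g in zip(model_forecast, manager_forecast)]
print(combined)  # [115.0, 101.5, 146.5]
```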

Journal ArticleDOI
TL;DR: In this article, the role of brand loyalty in determining optimal price promotional strategies used by firms in a competitive setting is analyzed, where loyalty is operationalized as the minimum price differential needed before consumers who prefer one brand switch to another brand.
Abstract: This paper analyzes the role played by brand loyalty in determining optimal price promotional strategies used by firms in a competitive setting. Loyalty is operationalized as the minimum price differential needed before consumers who prefer one brand switch to another brand. Our objective is to examine how loyalties toward the competing brands influence whether or not firms would use price promotions in a product category. We also examine how loyalty differences lead to variations in the depth and frequency with which price discounts are offered across brands in the same product category. The analysis predicts that a brand's likelihood of using price promotions increases with an increase in the number of competing brands in a product category. In the context of a market in which a brand with a large brand loyalty competes with a brand with a low brand loyalty, it is shown that in equilibrium, the stronger brand (i.e., the brand with the larger loyalty) promotes less frequently than the weaker brand. The results suggest that the weaker brand gains more from price promotions. The analysis helps us understand discounting patterns in markets where store brands, weak national brands, or newly introduced national brands compete against strong, well known, national brands. The findings are based on the unique perfect equilibrium in a finitely repeated game. The predictions of the model are compared with the data on 27 different product categories. The data are consistent with the main findings of the model.

Journal ArticleDOI
TL;DR: In this article, the single firm bundle pricing problem is viewed as a disjunctive program which is formulated as a mixed integer linear program, and a variety of cost and reservation price conditions are investigated with this approach.
Abstract: Bundle pricing is a widespread phenomenon. However, despite its importance as a pricing tool, surprisingly little is known about how to find optimal bundle prices. Most discussions in the literature are restricted to only two components, and even in this case no algorithm is given for setting prices. Here we show that the single firm bundle pricing problem is naturally viewed as a disjunctive program which is formulated as a mixed integer linear program. Multiple components, and a variety of cost and reservation price conditions are investigated with this approach. Several new economic insights on the role and effectiveness of bundling are presented. An added benefit of the solution to the bundle pricing model is the selection of products which compose the firm's product line. Computational testing is done on problems with up to 21 components over one million potential product bundles, and data collection issues are addressed.
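The paper's approach is a mixed integer linear program; purely as a toy illustration of the underlying economics, the following brute-force search prices two components and their bundle against hypothetical reservation prices, assuming additive bundle reservation prices and surplus-maximizing consumers.

```python
# Toy brute-force illustration of bundle pricing economics (the paper
# formulates and solves a mixed integer linear program instead). Each
# consumer buys the surplus-maximizing option, if any; bundle reservation
# prices are assumed additive, and all numbers are hypothetical.
from itertools import product

consumers = [(8, 2), (6, 5), (3, 7), (9, 9)]  # reservation prices for (A, B)

def revenue(pa, pb, pab):
    total = 0
    for ra, rb in consumers:
        options = [(pa, ra - pa),                 # buy A only
                   (pb, rb - pb),                 # buy B only
                   (pa + pb, ra + rb - pa - pb),  # buy both separately
                   (pab, ra + rb - pab)]          # buy the bundle
        price, surplus = max(options, key=lambda o: o[1])
        if surplus >= 0:
            total += price
    return total

grid = range(1, 19)
best = max(product(grid, grid, grid), key=lambda p: revenue(*p))
print("prices (A, B, bundle):", best, "revenue:", revenue(*best))
```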

Journal ArticleDOI
TL;DR: In this paper, the authors discuss important friction forces that future theories of strategy must incorporate and highlight the behavioral dimension, in the belief that strategies should incorporate both the rational and suboptimal aspects of human behavior.
Abstract: This paper discusses important friction forces that future theories of strategy must incorporate. Some of these are technological and environmental; but the most important ones, it is argued, are psychological. The view is developed that strategy, at its core, concerns the development and testing of heuristics for high stake decisions in environments too unstable and complex to be optimized. The paper especially highlights the behavioral dimension, in the belief that strategies should incorporate both the rational and suboptimal aspects of human behavior. The rational approach is in many ways the easier, as there may be only one way to be right. Yet, the great variety in which people and companies can err gives strategy its creative and real-world challenge. The tension between the rational and behavioral components is what the field of strategy should seek to exploit.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a model, called venture theory, of how people assess decision weights, which is assumed that people first anchor on a stated probability and then adjust this by mentally simulating other possible values.
Abstract: Several theories suggest that people replace probabilities by decision weights when evaluating risky outcomes. This paper proposes a model, called venture theory, of how people assess decision weights. It is assumed that people first anchor on a stated probability and then adjust this by mentally simulating other possible values. The amount of mental simulation is affected by the absolute size of payoffs, the extent to which the anchor deviates from the extremes of 0 and 1, and the level of perceived ambiguity concerning the relevant probability. The net effect of the adjustment (i.e., up or down vis-à-vis the anchor) reflects the relative weight given in imagination to values above as opposed to below the anchor. This, in turn, is taken to be a function of both individual and situational variables, and in particular, the sign and size of payoffs. Cognitive and motivational factors therefore both play important roles in determining decision weights. Assuming that people evaluate outcomes by a prospect theory value function (Kahneman and Tversky 1979) and are cautious in the face of risk, fourteen predictions are derived concerning attitudes toward risk and ambiguity as functions of different levels of payoffs and probabilities. The results of three experiments are reported. Whereas only a subset of the model's predictions can be tested in Experiment 1, all fourteen are tested in Experiments 2 and 3 using hypothetical and real payoffs, respectively. Several of the model's predictions are not supported in Experiment 2 but almost all are validated in Experiments 1 and 3. The failures relate to the exact nature of probability × payoff interactions in attitudes toward risk and ambiguity for losses. The theory and results are discussed in relation to other experimental evidence, future tests of the theory, alternative models of risky choice, and implications of venture theory for explaining further phenomena.

Journal ArticleDOI
TL;DR: In this article, the authors present a general model for aggregating votes from a preferential ballot, in which the weights W_j are assumed to form a monotonically decreasing sequence with W_j − W_{j+1} ≥ d(j, ε); these constraints correspond to the assurance region (AR) side constraints in the DEA framework.
Abstract: This paper presents a general model for aggregating votes from a preferential ballot. The thrust of the model is to accord each candidate a fair assessment in terms of his overall standing vis-à-vis first place, second place, …, kth place votes. The form of the model is a combined index Σ_{j=1}^{k} W_j v_{ij}, where v_{ij} is the number of jth-place votes received by the ith candidate. The weights W_j are assumed to form a monotonically decreasing sequence with W_j − W_{j+1} ≥ d(j, ε). These constraints correspond to the assurance region (AR) side constraints in the DEA framework. The properties of the model are examined in terms of this discrimination intensity function d, and in the special case that d(j, ε) = ε, our model is shown to be equivalent to the consensus models of Borda and Kendall.
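A small numeric sketch of the combined index may help: with d(j, ε) = ε, evenly spaced Borda-style weights satisfy the monotonicity constraints. The vote counts below are hypothetical.

```python
# Sketch of the combined index sum_j W_j * v_ij under the monotonicity
# constraints W_j - W_{j+1} >= d(j, eps). With d(j, eps) = eps, evenly
# spaced Borda-style weights are feasible. Vote counts are hypothetical:
# votes[i][j] = number of (j+1)th-place votes received by candidate i.
votes = [
    [10, 6,  4],   # candidate 0
    [ 8, 9,  3],   # candidate 1
    [ 2, 5, 13],   # candidate 2
]
k, eps = 3, 1.0
weights = [eps * (k - j) for j in range(k)]   # [3, 2, 1]: W_j - W_{j+1} = eps

scores = [sum(w * v for w, v in zip(weights, row)) for row in votes]
ranking = sorted(range(len(votes)), key=lambda i: -scores[i])
print("scores:", scores, "ranking:", ranking)  # scores: [46.0, 45.0, 29.0]
```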

Journal ArticleDOI
TL;DR: In this article, the authors present a brief discourse on the flaws in Dyer's argument and argue that this criticism arises out of a lack of understanding of the theory underlying the AHP.
Abstract: The paper by J. S. Dyer (1990, this issue) presents two arguments against the use of the Analytic Hierarchy Process (AHP): the axioms are "flawed" and the rankings which the AHP produces are "arbitrary". In particular, he takes offense at our earlier claim (1987) that much of the criticism of the AHP is based on a misunderstanding of the theoretical foundations of the AHP. The arguments raised by Dyer are not new (Dyer and Wendell 1985) and were, in fact, the reason for our earlier paper in Management Science (Harker and Vargas 1987). In this note, we would like to respond to Dyer's claim that the AHP is "flawed" and to argue that our initial claim is true: this criticism arises out of a lack of understanding of the theory underlying the AHP. However, we shall not respond on a point-by-point basis since this has been done by Saaty (1990, this issue). Thus, we will present a brief discourse on the flaws in Dyer's argument. Before attacking the axiomatic basis of the AHP and the "flawed" nature of the ranking procedure in the AHP, Dyer claims that the questions asked in the AHP are ambiguous; he writes:

Journal ArticleDOI
TL;DR: The study finds that the performance of new drugs introduced during the latter half of the 1970s was markedly better than that of early 1970s introductions, consistent with the more rapid rate of industry growth in real R&D expenditures.
Abstract: This study investigates the returns to R&D for 100 new drugs introduced into the United States during the decade of the 1970s. In contrast to prior studies, it incorporates several significant structural changes that have occurred in the pharmaceutical industry during the 1980s. These include higher real drug prices and a greater degree of generic competition. A major finding is that the return on R&D for the average new drug is approximately equal to the 9 percent industry cost of capital. However, the performance of new drugs introduced during the latter half of the 1970s was markedly better than that of early 1970s introductions. This latter finding is consistent with the more rapid rate of industry growth in real R&D expenditures. The study also finds that the variation in returns is highly skewed, with only the top 30 drugs covering mean R&D costs on a fully allocated basis. Finally, it is shown that real drug price increases in the 1980s were necessary for the average new drug introduction to recover its R&D costs.

Journal ArticleDOI
TL;DR: A class of low complexity heuristics are described and it is shown under mild probabilistic assumptions that the generated solutions are asymptotically optimal within the above class of strategies.
Abstract: We consider distribution systems with a depot and many geographically dispersed retailers, each of which faces external demands occurring at constant, deterministic but retailer-specific rates. All stock enters the system through the depot, from where it is distributed to the retailers by a fleet of capacitated vehicles combining deliveries into efficient routes. Inventories are kept at the retailers but not at the depot. We wish to determine feasible replenishment strategies (i.e., inventory rules and routing patterns) minimising infinite-horizon long-run average transportation and inventory costs. We restrict ourselves to a class of strategies in which a collection of regions (sets of retailers) is specified which covers all outlets: if an outlet belongs to several regions, a specific fraction of its sales/operations is assigned to each of these regions. Each time one of the retailers in a given region receives a delivery, this delivery is made by a vehicle that visits all other outlets in the region as well in an efficient route. We describe a class of low complexity heuristics and show under mild probabilistic assumptions that the generated solutions are asymptotically optimal within the above class of strategies. We also show that lower and upper bounds on the system-wide costs may be computed and that these bounds are asymptotically tight under the same assumptions. A numerical study exhibits the performance of these heuristics and bounds for problems of moderate size.

Journal ArticleDOI
TL;DR: In this paper, the authors studied optimal pricing and capacity decisions for a service facility in an environment where users' delay cost is important and found necessary and sufficient conditions for the optimality of a pricing rule that charges out service resources at their marginal capacity cost.
Abstract: This paper studies optimal pricing and capacity decisions for a service facility in an environment where users' delay cost is important. The model assumes a general nonlinear delay cost structure and incorporates the tradeoff between the delay cost and capacity cost. We find necessary and sufficient conditions for the optimality of a pricing rule that charges out service resources at their marginal capacity cost. We examine the issue of budgetary balance and find that net-value maximization entails a budget deficit for the service facility; that is, the service facility should be evaluated as a "deficit center." The results provide guidelines under which the optimal magnitude of the deficit can be determined.
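A numerical sketch of the delay-cost/capacity-cost tradeoff follows, assuming an M/M/1 queue with a linear delay cost (the paper treats a general nonlinear delay cost structure) and entirely hypothetical functional forms and parameters.

```python
# Numerical sketch of the delay-cost / capacity-cost tradeoff, assuming an
# M/M/1 queue with linear delay cost (the paper allows general nonlinear
# delay costs). All functional forms and parameters are hypothetical.
V  = lambda lam: 10 * lam ** 0.5   # gross value of serving demand rate lam
cd = 2.0                           # delay cost per job per unit time
cm = 1.0                           # capacity cost per unit of service rate

def net_value(lam, mu):
    wait = 1.0 / (mu - lam)        # M/M/1 expected time in system
    return V(lam) - cd * lam * wait - cm * mu

# grid search over feasible (lam, mu) pairs with mu > lam
best = max(((l / 10, m / 10) for l in range(1, 300)
            for m in range(l + 1, 400)), key=lambda x: net_value(*x))
print("optimal (lambda, mu):", best, "net value:", round(net_value(*best), 2))
```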

Journal ArticleDOI
TL;DR: This paper provides a new technique for modelling such lateral transshipments in continuous review inventory systems with one-for-one replenishments and Poisson demand and applies this technique to a two-echelon system with repairable items.
Abstract: In inventory systems with several bases that support different geographical regions, it is quite common to allow emergency lateral transshipments between the bases. This may be advantageous if neighbouring bases are at shorter distances than the central depot or the external supplier. This paper provides a new technique for modelling such lateral transshipments in continuous review inventory systems with one-for-one replenishments and Poisson demand. We apply this technique to a two-echelon system with repairable items.
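The mechanism is easy to simulate. Below is a Monte Carlo sketch, not the paper's analytical technique: two bases with Poisson demand and one-for-one replenishment, where a stocked-out base draws an emergency lateral transshipment from its neighbour. For simplicity the sketch assumes lost sales when both bases are empty, and all parameters are hypothetical.

```python
# Monte Carlo illustration of emergency lateral transshipment between two
# bases (the paper develops an analytical model; this simulation, with
# hypothetical parameters and a lost-sales simplification, only
# illustrates the mechanism).
import math, random

def poisson(lam):
    # Knuth's inversion method for a Poisson draw
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

random.seed(3)
S, lead, rate, days = 5, 4, 0.8, 50000   # base stock, lead time, demand rate
stock = [S, S]
pipeline = []                            # (arrival_day, base) open orders
served = trans = total = 0
for day in range(days):
    arrivals = [b for (d, b) in pipeline if d == day]
    pipeline = [(d, b) for (d, b) in pipeline if d != day]
    for b in arrivals:
        stock[b] += 1
    for b in (0, 1):
        for _ in range(poisson(rate)):
            total += 1
            src = b if stock[b] > 0 else 1 - b   # try lateral transshipment
            if stock[src] > 0:
                stock[src] -= 1
                pipeline.append((day + lead, src))  # one-for-one replenishment
                served += 1
                trans += (src != b)
print("fill rate:", round(served / total, 3),
      "transshipment share:", round(trans / total, 3))
```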

Journal ArticleDOI
TL;DR: An algorithm for a mixed set covering/partitioning model that includes as special cases the well-known set covering problem and set partitioning problem is presented.
Abstract: We present an algorithm for a mixed set covering/partitioning model that includes as special cases the well-known set covering problem and set partitioning problem. The novel feature of our algorit...
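A mixed covering/partitioning model of the kind described can be stated in a few lines with an off-the-shelf modelling library. Here is a sketch using PuLP (a choice of this illustration, not the authors' algorithm), with hypothetical costs and columns; "cover" rows require at least one selected column, "partition" rows exactly one.

```python
# Minimal mixed set covering/partitioning formulation (data are
# hypothetical; the paper presents its own algorithm, not this solver).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

cost    = {"c1": 3, "c2": 2, "c3": 4, "c4": 1}
columns = {"c1": {1, 2}, "c2": {2, 3}, "c3": {1, 3, 4}, "c4": {4}}
cover_rows     = {1, 2}   # must be covered at least once
partition_rows = {3, 4}   # must be covered exactly once

x = {c: LpVariable(c, cat="Binary") for c in columns}
prob = LpProblem("mixed_cover_partition", LpMinimize)
prob += lpSum(cost[c] * x[c] for c in columns)        # minimize total cost
for r in cover_rows:
    prob += lpSum(x[c] for c in columns if r in columns[c]) >= 1
for r in partition_rows:
    prob += lpSum(x[c] for c in columns if r in columns[c]) == 1
prob.solve()
print({c: int(x[c].value()) for c in columns})        # e.g. c1, c2, c4 chosen
```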

Journal ArticleDOI
TL;DR: To insure a successful implementation, managers must consider the "fit" between a CMCS and a particular work group.
Abstract: Interactive computer systems should be viewed as “socio-technical” systems whose acceptance is influenced by an interaction among characteristics of the individual users, the groups and organizations in which they are implemented, and the computer systems themselves. Four months after completing baseline questionnaires, new users of four computer-mediated communication systems (CMCS) answered follow-up questionnaires which included a number of items measuring subjective satisfaction. Factor analysis identified two primarily instrumental dimensions (satisfaction with the Interface and with system Performance), and two primarily social-emotional dimensions (Unexpressive—perceived inadequacy of the system for expressive, emotional, or personal communication—and Mode Problems with computer-mediated communication). The strongest correlates of Interface satisfaction are differences in system software and documentation, interacting with baseline attitudes and characteristics of the individual users. By contrast,...

Journal ArticleDOI
TL;DR: In this article, the authors extend Kohli and Krishnamurti's (1987) dynamic-programming heuristic for selecting a single item maximizing share to structure product lines maximizing share, seller's return, or buyers' utilitarian welfare.
Abstract: Recently proposed methods for product-line selection use the total utilities of candidate items to construct product lines maximizing seller's return or buyers' welfare. For conjoint and hybrid-conjoint data, enumerating the utilities of candidate items can be computationally infeasible if the number of attributes and attribute levels is large and most multi-attribute alternatives are feasible. For such problems, constructing product lines directly from part-worths data is preferable. We propose such methods, extending Kohli and Krishnamurti's (1987) dynamic-programming heuristic for selecting a single item maximizing share to structure product lines maximizing share, seller's return, or buyers' utilitarian welfare. The computational performance of the heuristics and their approximation of product-line solutions is evaluated using simulated data. Across problem instances, the dynamic-programming heuristics identify solutions that are no worse, in terms of approximating optimal solutions, than the solutions of heuristics for the current two-step approaches to product-line design. An application using hybrid-conjoint data for a consumer-durable product is described.
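As a rough illustration of constructing an item directly from part-worths rather than enumerating candidate items, here is a simple coordinate-ascent sketch, deliberately much cruder than the authors' dynamic-programming heuristics; the share criterion counts buyers whose utility for the item exceeds a status-quo utility, and all data are simulated.

```python
# Coordinate-ascent sketch of share maximization from part-worths data
# (an illustration of the idea only, not the authors' dynamic program).
# All part-worths and status-quo utilities are simulated.
import random

random.seed(0)
n_attrs, n_levels, n_buyers = 4, 3, 200
partworth = [[[random.random() for _ in range(n_levels)]
              for _ in range(n_attrs)] for _ in range(n_buyers)]
status_quo = [sum(random.random() for _ in range(n_attrs))
              for _ in range(n_buyers)]

def share(item):
    # fraction of buyers preferring the item to their status quo
    return sum(sum(partworth[b][a][item[a]] for a in range(n_attrs))
               > status_quo[b] for b in range(n_buyers)) / n_buyers

item = [0] * n_attrs
improved = True
while improved:
    improved = False
    for a in range(n_attrs):
        best = max(range(n_levels),
                   key=lambda l: share(item[:a] + [l] + item[a + 1:]))
        if best != item[a]:
            item[a], improved = best, True
print("item:", item, "share:", share(item))
```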