# Papers in "Management Science" in 1984

••

TL;DR: The CCR ratio form introduced by Charnes, Cooper and Rhodes as part of their Data Envelopment Analysis approach captures both technical and scale inefficiencies via the optimal value of the ratio form, obtained directly from the data without requiring a priori specification of weights or explicit delineation of assumed functional forms relating inputs to outputs.

Abstract: In management contexts, mathematical programming is usually used to evaluate a collection of possible alternative courses of action en route to selecting one which is best. In this capacity, mathematical programming serves as a planning aid to management. Data Envelopment Analysis reverses this role and employs mathematical programming to obtain ex post facto evaluations of the relative efficiency of management accomplishments, however they may have been planned or executed. Mathematical programming is thereby extended for use as a tool for control and evaluation of past accomplishments as well as a tool to aid in planning future activities. The CCR ratio form introduced by Charnes, Cooper and Rhodes, as part of their Data Envelopment Analysis approach, comprehends both technical and scale inefficiencies via the optimal value of the ratio form, as obtained directly from the data without requiring a priori specification of weights and/or explicit delineation of assumed functional forms of relations between inputs and outputs. A separation into technical and scale efficiencies is accomplished by the methods developed in this paper without altering the latter conditions for use of DEA directly on observational data. Technical inefficiencies are identified with failures to achieve best possible output levels and/or usage of excessive amounts of inputs. Methods for identifying and correcting the magnitudes of these inefficiencies, as supplied in prior work, are illustrated. In the present paper, a new separate variable is introduced which makes it possible to determine whether operations were conducted in regions of increasing, constant or decreasing returns to scale in multiple input and multiple output situations. The results are discussed and related not only to classical single output economics but also to more modern versions of economics which are identified with "contestable market theories."
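The ratio form is computed in practice through its linear-programming dual, the envelopment form. A minimal sketch of the input-oriented CCR envelopment LP under constant returns to scale, with hypothetical data for three decision-making units (DMUs) and `scipy` as an assumed solver:

```python
# Input-oriented CCR envelopment LP (constant returns to scale).
# Data and DMU count are hypothetical; scipy's linprog is the assumed solver.
import numpy as np
from scipy.optimize import linprog

# rows = DMUs, columns = inputs / outputs
X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 3.0]])   # inputs
Y = np.array([[1.0], [1.0], [1.0]])                  # outputs

def ccr_efficiency(k):
    """Efficiency of DMU k: min theta s.t. X'lam <= theta*x_k, Y'lam >= y_k."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                      # minimize theta
    # input constraints:  sum_j lam_j x_ij - theta x_ik <= 0
    A_in = np.c_[-X[k], X.T]
    # output constraints: -sum_j lam_j y_rj <= -y_rk
    A_out = np.c_[np.zeros(Y.shape[1]), -Y.T]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[k]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for k in range(3):
    print(f"DMU {k}: efficiency = {ccr_efficiency(k):.3f}")
```

A score of 1 marks the efficient frontier (DMUs 0 and 1 here); DMU 2's score of about 0.833 is the largest uniform contraction of its inputs that a convex combination of observed DMUs can still match in output.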

14,941 citations

••

TL;DR: In this paper, the authors test a model of the organizational innovation process that suggests the strategy-structure causal sequence is differentiated by radical versus incremental innovation: unique strategy and structure are required for radical innovation, while more traditional strategy and structure arrangements tend to support new product introduction and incremental process adoption.

Abstract: The purpose of this study was to test a model of the organizational innovation process that suggests that the strategy-structure causal sequence is differentiated by radical versus incremental innovation. That is, unique strategy and structure will be required for radical innovation, especially process adoption, while more traditional strategy and structure arrangements tend to support new product introduction and incremental process adoption. This differentiated theory is strongly supported by data from the food processing industry. Specifically, radical process and packaging adoption are significantly promoted by an aggressive technology policy and the concentration of technical specialists. Incremental process adoption and new product introduction tend to be promoted in large, complex, decentralized organizations that have market-dominated growth strategies.
Findings also suggest that more traditional structural arrangements might be used for radical change initiation if the general tendencies that occur in these dimensions as a result of increasing size can be delayed or briefly modified, or if the organization can be partitioned structurally for radical vs. incremental innovation. In particular, centralization of decision making appears to be necessary for radical process adoption, along with a movement away from complexity toward more organizational generalists. This suggests that greater support from top managers is necessary to initiate and sustain radical departures from the organization's past.

1,487 citations

••

TL;DR: The authors present a conceptual framework, into which previous research has been mapped, that can provide direction to future efforts, along with a set of variables proposed as potentially affecting the relationship between user involvement and system success.

Abstract: User involvement in the design of computer-based information systems is enthusiastically endorsed in the prescriptive literature. However, determining when and how much, or even if, user involvement is appropriate are questions that have received inadequate research attention. In this paper, research that examines the link between user involvement and indicators of system success is reviewed. The authors find that much of the existing research is poorly grounded in theory and methodologically flawed; as a result, the benefits of user involvement have not been convincingly demonstrated. Until higher-quality studies are completed, intuition, experience, and unsubstantiated prescriptions will remain the practitioner's best guide to the determination of appropriate levels and types of user involvement; these will generally suggest that user involvement is appropriate for unstructured problems or when user acceptance is important.
In order to foster higher quality integrated research and to increase understanding of the user involvement-system success relationship, the authors present the following: a conceptual framework into which previous research has been mapped that can provide direction to future efforts; a review of existing measures of user involvement and system success; a set of variables that have been proposed as potentially impacting the relationship between user involvement and system success.

1,437 citations

••

TL;DR: In this paper, a review of recent literature on the corporate life cycle disclosed five common stages: birth, growth, maturity, revival, and decline. A sample of 161 periods of history from 36 firms was classified into the five life cycle stages using a few attributes deemed central to each.

Abstract: A review of recent literature on the corporate life cycle disclosed five common stages: birth, growth, maturity, revival, and decline. Theorists predicted that each stage would manifest integral complementarities among variables of environment ("situation"), strategy, structure and decision-making methods; that organizational growth and increasing environmental complexity would cause each stage to exhibit certain significant differences from all other stages along these four classes of variables; and that organizations tend to move in a linear progression through the five stages, proceeding sequentially from birth to decline. These contentions were tested by this study. A sample of 161 periods of history from 36 firms was classified into the five life cycle stages using a few attributes deemed central to each. Analyses of variance were performed on 54 variables of strategy, structure, environment and decision-making style. The results seemed to support the prevalence of complementarities among variables within each stage and the predicted inter-stage differences. They did not, however, show that organizations went through the stages in the same sequence.

1,337 citations

••

TL;DR: In this paper, the authors describe the activities of venture capitalists as an orderly process involving five sequential steps: deal origination, deal screening, deal evaluation, deal structuring (the negotiation of the price of the deal and the covenants which limit the risk of the investor), and post-investment activities.

Abstract: The paper describes the activities of venture capitalists as an orderly process involving five sequential steps. These are (1) Deal Origination: the processes by which deals enter into consideration as investment prospects; (2) Deal Screening: a delineation of key policy variables which delimit investment prospects to a manageable few for in-depth evaluation; (3) Deal Evaluation: the assessment of perceived risk and expected return on the basis of a weighting of several characteristics of the prospective venture, and the decision whether or not to invest as determined by the relative levels of perceived risk and expected return; (4) Deal Structuring: the negotiation of the price of the deal, namely the equity relinquished to the investor, and the covenants which limit the risk of the investor; (5) Post-Investment Activities: the assistance to the venture in the areas of recruiting key executives, strategic planning, locating expansion financing, and orchestrating a merger, acquisition or public offering. Forty-one venture capitalists provided data on a total of 90 deals which had received serious consideration in their firms. The questionnaire measured the mechanism of initial contact between venture capitalist and entrepreneur, the venture's industry, the stage of financing and product development, ratings of the venture on 23 characteristics, an assessment of the potential return and perceived risk, and the decision vis-à-vis whether to invest. The modal venture represented in the database was a start-up in the electronics industry with a production capability in place, seeking a median of $1 million in outside financing. There is a high degree of cross-referral between venture capitalists, particularly for the purpose of locating co-investors.
Factor analysis reduced the 23 characteristics of the deal to five underlying dimensions, namely (1) Market Attractiveness (size, growth, and access to customers), (2) Product Differentiation (uniqueness, patents, technical edge, profit margin), (3) Managerial Capabilities (skills in marketing, management, finance, and the references of the entrepreneur), (4) Environmental Threat Resistance (technology life cycle, barriers to competitive entry, insensitivity to business cycles, and down-side risk protection), and (5) Cash-Out Potential (future opportunities to realize capital gains by merger, acquisition or public offering). The results of regression analyses showed expected return to be determined by Market Attractiveness and Product Differentiation (R² = 0.22). Perceived risk is determined by Managerial Capabilities and Environmental Threat Resistance (R² = 0.33). Finally, a discriminant analysis correctly predicted, in 89.4% of the cases, whether or not a venture capitalist was willing to commit funds to the deal on the basis of the expected return and perceived risk. The reactions of seven venture capitalists who reviewed the model's specification were used to test its validity.

1,066 citations

••

TL;DR: In this article, the authors analyze how a supplier can structure the terms of an optimal quantity discount schedule to maximize the supplier's incremental net profit and cash flow by adjusting its present pricing schedule to entice a major customer to increase its present order size by a factor of "K".

Abstract: In this paper, we analyze how a supplier can structure the terms of an optimal quantity discount schedule. The vendor's challenge is to adjust his present pricing schedule to entice his major customer to increase his present order size by a factor of "K." Optimal levels for "K" and the corresponding price discount are determined in order to maximize the supplier's incremental net profit and cash flow. Implementation issues are discussed and future research needs identified.
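A numeric sketch of this logic under standard EOQ assumptions (all cost parameters hypothetical, and a simplified variant of the model rather than the paper's full formulation): the buyer currently orders its EOQ, the supplier saves order-processing cost when the order size grows to K·Q, and must grant a discount covering the buyer's extra holding and ordering cost.

```python
# Supplier's optimal order-size multiplier K under simplified EOQ assumptions.
# All parameter values are hypothetical.
import math

D, A_b, A_s, h = 1000.0, 50.0, 150.0, 2.0   # demand, order costs, holding cost
Q = math.sqrt(2 * A_b * D / h)              # buyer's current EOQ
C = math.sqrt(A_b * D * h / 2)              # scale of the buyer's cost increase

def net_profit(K):
    saving = A_s * D / Q * (1 - 1 / K)      # supplier processes fewer orders
    discount = C * (K + 1 / K - 2)          # compensates the buyer's extra cost
    return saving - discount

# grid search for the profit-maximizing K
K_best = max((k / 1000 for k in range(1000, 5001)), key=net_profit)
print(round(K_best, 3))   # ~2.0 for these parameters
```

For this simplified variant the grid optimum agrees with the closed form K* = sqrt(1 + A_s / A_b) = 2.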

670 citations

••

TL;DR: In this paper, the authors describe the nature and design of post-industrial organizations, and examine designs for making more effective three processes that will exhibit increased importance in post-industrial organizations: decision-making, innovation, and information acquisition and distribution.

Abstract: This paper describes the nature and design of post-industrial organizations. It begins with an assessment of the popular literature on post-industrial society, and finds that this literature is an inappropriate basis for inferring the nature of post-industrial organizations. Partly as a consequence of this finding, the paper turns to systems theory as a basis for determining both the nature of post-industrial society and the nature of the increased demands that this environment would impose on post-industrial organizations. The middle three sections of the paper describe design features that post-industrial organizations will employ to deal with these demands. In particular they examine designs for making more effective three processes that will exhibit increased importance in post-industrial organizations: (1) decision-making, (2) innovation, and (3) information acquisition and distribution.
In addition to its conclusions concerning the design features that post-industrial organizations will possess, the paper sets forth three general conclusions. One of these is that, even though the aggregate of the demands on post-industrial organizations will be qualitatively greater than that experienced by previous organizations, there are design features that organizations can adopt that will enable them to cope with even worst-case loadings of these demands. A second conclusion is that the nature of the post-industrial environment will cause decision-making, innovation, and information acquisition and distribution to take on added importance in post-industrial organizations, and that one result of this will be that organizations will attempt to ensure routine effectiveness of these processes through increased formalization. In some cases this formalization will have as its purpose ensuring the existence of informal or at least unstructured activities, such as experimentation by "self-designing" organizations or acquisition of "soft" information by top managers.
The third conclusion set forth is that during the current transition period between the industrial and post-industrial societies we can expect many organizations to fail, or to flee to less than wholly desirable niches, because they are ignorant of the post-industrial technologies, structures, and processes that would enable them to successfully engage the post-industrial environment and to become viable post-industrial organizations. It appears that an important task of organizational and management scientists during this period will be to aid in the development, transfer, and implementation of post-industrial design features and in this way help reduce the possibility of unnecessary failure or flight.

656 citations

••

TL;DR: In this paper, the authors examined a key tenet from each of these literatures in an effort to construct a robust model of innovative behavior and found significant differences in the factors influencing administrative and technical innovations with organizational receptivity toward change important only for the technical innovations.

Abstract: Because many organizations have not been successful in introducing new task and managerial methods into the workplace, considerable attention has been directed toward developing a more complete understanding of organizational innovation. Three separate literatures (organizational science, engineering/R&D, …) have addressed the topic; however, surprisingly little integration among the three has occurred. This paper reports on a study which examined a key tenet from each of these literatures in an effort to construct a robust model of innovative behavior. Specifically, the study utilized survey data in examining the validity of "push-pull" theory (i.e., that innovation is most likely to occur when a need and a means to resolve that need are simultaneously recognized), as well as the importance of top management attitude toward an innovation and of organizational receptivity toward change. The research context involved the diffusion of six modern software practices into 47 software development groups.
While the model's independent variables explained a rather large amount of the variance in the use of these modern software practices, "push-pull" theory was not validated. A number of explanations are offered for the apparent failure of "push-pull" theory. Top management attitude and organizational receptivity toward change, however, were generally found to influence organizational innovation. As hypothesized, significant differences emerged in the factors influencing administrative and technical innovations with organizational receptivity toward change important only for the technical innovations. This suggests that organizational processes facilitating innovation should vary depending on the nature of the innovation involved.

572 citations

••

TL;DR: This paper examines the special case of the two-level linear programming problem and presents geometric characterizations and algorithms to demonstrate the tractability of such problems and motivate a wider interest in their study.

Abstract: Decentralized planning has long been recognized as an important decision making problem. Many approaches based on the concepts of large-scale system decomposition have generally lacked the ability to model the type of truly independent subsystems which often exist in practice. Multilevel programming models partition control over decision variables among ordered levels within a hierarchical planning structure. A planner at one level of the hierarchy may have his objective function and set of feasible decisions determined, in part, by other levels. However, his control instruments may allow him to influence the policies at other levels and thereby improve his own objective function. This paper examines the special case of the two-level linear programming problem. Geometric characterizations and algorithms are presented with some examples. The goal is to demonstrate the tractability of such problems and motivate a wider interest in their study.
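A brute-force illustration on a tiny hypothetical instance (not one of the paper's algorithms): the leader sweeps its decision over a grid, the follower solves its own LP given each leader choice, and the leader evaluates its objective on the follower's reaction.

```python
# Two-level (bilevel) LP sketch on a hypothetical instance, solved naively by
# gridding the leader's variable; scipy's linprog solves the follower's LP.
import numpy as np
from scipy.optimize import linprog

def follower(x):
    """Follower maximizes y subject to y <= x, y <= 3 - x, y >= 0."""
    res = linprog([-1.0], A_ub=[[1.0], [1.0]], b_ub=[x, 3.0 - x],
                  bounds=[(0.0, None)])
    return res.x[0]

best_x, best_val = None, -np.inf
for x in np.linspace(0.0, 2.0, 201):
    y = follower(x)                 # follower's rational reaction to x
    val = -x + 4.0 * y              # leader's objective given that reaction
    if val > best_val:
        best_x, best_val = x, val
print(best_x, best_val)             # leader optimum at x = 1.5, value 4.5
```

The enumeration makes the hierarchical structure explicit; the paper's algorithms exploit the geometry of the problem instead of a grid.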

539 citations

••

TL;DR: This paper surveys the tactical aspects of this interaction between sequencing priorities and the method of assigning due-dates, focusing primarily on average tardiness as a measure of scheduling effectiveness.

Abstract: Recent research studies of job shop scheduling have begun to examine the interaction between sequencing priorities and the method of assigning due-dates. This paper surveys the tactical aspects of this interaction, focusing primarily on average tardiness as a measure of scheduling effectiveness. The discussion highlights several factors that can affect the performance of dispatching rules, such as the average flow allowance, the due-date assignment method, and the use of progress milestones. A set of simulation experiments illuminates how these factors interact with the dispatching rule, and the experimental results suggest which combinations are most effective in a scheduling system.
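The interaction can be illustrated in a few lines: given due dates produced by some assignment method, different dispatching rules yield different average tardiness on the same job set. A hypothetical 4-job, single-machine instance:

```python
# Comparing dispatching rules by average tardiness on one machine.
# The 4-job instance and its due dates are hypothetical.
jobs = [  # (processing_time, due_date)
    (2, 3), (5, 6), (1, 9), (4, 7),
]

def avg_tardiness(order):
    t, total = 0, 0.0
    for j in order:
        p, d = jobs[j]
        t += p                      # completion time under this sequence
        total += max(0, t - d)      # tardiness of job j
    return total / len(jobs)

spt = sorted(range(len(jobs)), key=lambda j: jobs[j][0])  # shortest processing time
edd = sorted(range(len(jobs)), key=lambda j: jobs[j][1])  # earliest due date
print("SPT avg tardiness:", avg_tardiness(spt))   # 1.5
print("EDD avg tardiness:", avg_tardiness(edd))   # 2.0
```

Changing the due-date assignment (e.g., a tighter or looser flow allowance) changes which rule wins, which is exactly the interaction the survey examines.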

475 citations

••

TL;DR: In this paper, a comprehensive study of methods for assessing unidimensional expected utility functions is presented, covering preference comparison methods, probability equivalence methods, value equivalence methods, certainty equivalence methods, hybrid methods, paired-gamble methods, and other approaches.

Abstract: This paper is a comprehensive study of methods for assessing unidimensional expected utility functions. The paper describes the utility assessment process in decision analysis and then reviews problem formulation, sources of bias in preference judgments, and the analysis of risk attitudes. Two dozen utility assessment methods, of which half appear for the first time, are critically examined. These methods are grouped into preference comparison methods, probability equivalence methods, value equivalence methods, certainty equivalence methods, hybrid methods, paired-gamble methods, and other approaches. The paper emphasizes the nature of judgmental biases in comparing different assessment procedures. Since most multiattribute utility functions are decomposed into single-attribute functions, this study should facilitate such applications. The paper concludes with several directions for further developmental, empirical, and applied research.
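As a concrete instance of one family named above, here is a sketch of a certainty-equivalence assessment, fitting the common exponential utility u(x) = 1 − exp(−x/R) to a single elicited certainty equivalent. The gamble, the response, and the functional form are illustrative assumptions, not the paper's prescription.

```python
# Certainty-equivalence assessment sketch: recover the risk tolerance R of an
# exponential utility from one elicited certainty equivalent (hypothetical data).
import math

def implied_ce(low, high, r):
    """Certainty equivalent of a 50-50 gamble under risk tolerance r."""
    eu_exp = 0.5 * math.exp(-low / r) + 0.5 * math.exp(-high / r)
    return -r * math.log(eu_exp)

def fit_risk_tolerance(low, high, ce, lo_r=1e-2, hi_r=1e7):
    """Bisect on r: implied_ce increases in r toward the gamble's mean."""
    for _ in range(200):
        mid = 0.5 * (lo_r + hi_r)
        if implied_ce(low, high, mid) < ce:
            lo_r = mid
        else:
            hi_r = mid
    return 0.5 * (lo_r + hi_r)

# Decision maker says a 50-50 gamble on $0 vs $1000 is worth $379.9 for sure:
print(fit_risk_tolerance(0.0, 1000.0, 379.9))   # risk tolerance near 1000
```

The paper's point about judgmental bias applies here directly: the fitted R inherits any bias in the single elicited response, which is why multiple methods are compared.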

••

TL;DR: This paper describes a practical algorithm for large-scale mean-variance portfolio optimization that can be made extremely efficient by "sparsifying" the covariance matrix with the introduction of a few additional variables and constraints, and by treating the transaction cost schedule as an essentially nonlinear nondifferentiable function.

Abstract: This paper describes a practical algorithm for large-scale mean-variance portfolio optimization. The emphasis is on developing an efficient computational approach applicable to the broad range of portfolio models employed by the investment community. What distinguishes these from the "usual" quadratic program is (i) the form of the covariance matrix arising from the use of factor and scenario models of return, and (ii) the inclusion of transactions limits and costs. A third aspect is the question of whether the problem should be solved parametrically in the risk-reward trade-off parameter, λ, or separately for several discrete values of λ. We show how the parametric algorithm can be made extremely efficient by "sparsifying" the covariance matrix with the introduction of a few additional variables and constraints, and by treating the transaction cost schedule as an essentially nonlinear, nondifferentiable function. Then we show how these two seemingly unrelated approaches can be combined to yield good approximate solutions when minimum trading size restrictions ("buy or sell at least a certain amount, or not at all") are added. In combination, these approaches make possible the parametric solution of problems on a scale not heretofore possible on computers where CPU time and storage are the constraining factors.
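The "sparsifying" step can be shown directly: with a factor model, the covariance is C = BFBᵀ + diag(d), and introducing the factor exposures y = Bᵀw as extra variables (tied to w by linear constraints) replaces the dense quadratic wᵀCw with a sparse one. A small numeric check with hypothetical data:

```python
# Factor-model "sparsification" check: the dense quadratic form w'Cw equals
# w' diag(d) w + y'Fy once factor exposures y = B'w are added as variables.
# All data here are randomly generated and hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3                       # 50 assets, 3 factors
B = rng.normal(size=(n, k))        # factor loadings
F = np.diag([0.04, 0.02, 0.01])    # factor covariance
d = rng.uniform(0.01, 0.05, n)     # specific (idiosyncratic) variances

w = rng.dirichlet(np.ones(n))      # some portfolio weights

dense_var = w @ (B @ F @ B.T + np.diag(d)) @ w
y = B.T @ w                        # factor exposures as extra variables
sparse_var = w @ (d * w) + y @ F @ y

assert np.isclose(dense_var, sparse_var)
print(dense_var, sparse_var)
```

In a QP solver, the dense n×n Hessian is thus replaced by a diagonal block of size n plus a tiny k×k block, which is the source of the paper's speedup.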

••

TL;DR: The authors examine a diffusion model for products in which negative information plays a dominant role, discuss its implications for optimal advertising timing policy, and present an application to forecast attendance for the movie Gandhi in the Dallas area.

Abstract: Existing innovation diffusion models assume that individual experience with the product is always communicated positively through word-of-mouth. For certain innovations, however, this assumption is tenuous since communicators of the product experience may transfer favorable, unfavorable, or indifferent messages through word-of-mouth. This paper examines a diffusion model for products in which negative information plays a dominant role, discusses its implications for optimal advertising timing policy and presents an application to forecast attendance for the movie Gandhi in the Dallas area.
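A generic discrete-time sketch of diffusion with negative word-of-mouth, in the spirit of the model rather than the authors' exact specification (all parameter values hypothetical): a fraction of adopters spreads unfavorable messages, which subtracts from the internal-influence term.

```python
# Bass-style diffusion with a negative word-of-mouth term (generic variant;
# not the paper's exact model, and all parameters are hypothetical).
def simulate(m=100_000, p=0.01, q=0.4, neg_share=0.0, k=0.6, periods=30):
    """neg_share: fraction of adopters spreading unfavorable word-of-mouth;
    k: strength with which negative word-of-mouth suppresses adoption."""
    pos = neg = cum = 0.0
    path = []
    for _ in range(periods):
        rate = p + q * (pos - k * neg) / m   # external + net internal influence
        adopt = max(rate, 0.0) * (m - cum)
        cum += adopt
        pos += adopt * (1 - neg_share)
        neg += adopt * neg_share
        path.append(cum)
    return path

no_neg = simulate(neg_share=0.0)
with_neg = simulate(neg_share=0.4)
print(round(no_neg[-1]), round(with_neg[-1]))
```

With unfavorable word-of-mouth present, cumulative adoption over the horizon is lower, which is what makes early advertising timing more valuable in such markets.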

••

TL;DR: This overview concentrates on those techniques which require an articulation of the decision maker's preference structure either during or after the optimization, since these are the areas where most of the recent research has been conducted.

Abstract: Multiobjective mathematical programming has been one of the fastest growing areas of OR/MS during the last 15 years. This paper presents: some reasons for the rapidly growing increase in interest in multiobjective mathematical programming, a discussion of the advantages and disadvantages of the three general approaches (articulation of the decision maker's preference structure over the multiple objectives prior to, during, or after the optimization) towards multiobjective mathematical programming, a nontechnical overview of many of the specific solution techniques for multiobjective mathematical programming, and a discussion of important areas for further research. The overview concentrates on those techniques which require an articulation of the decision maker's preference structure either during or after the optimization, since these are the areas where most of the recent research has been conducted. It differs from previous overviews in that, in addition to the timing of the elicited preference information…

••

TL;DR: In this article, a model of buyer reaction to any given pricing scheme is developed to show that there exists a unified pricing policy which motivates the buyer to increase its order quantity, thereby reducing the joint buyer and seller ordering and holding costs.

Abstract: This paper addresses the problem of why and how a seller should develop a discount pricing structure even if such a pricing structure does not alter ultimate demand. The situation modeled is most appropriate where the seller's product does not represent a major component of the buyer's final product, where the demand for the product is derived, or where the price is only one of many factors considered in making a purchase decision. A model of buyer reaction to any given pricing scheme is developed to show that there exists a unified pricing policy which motivates the buyer to increase its ordering quantity per order, thereby reducing the joint buyer and seller ordering and holding costs. As a result, the seller is able to reduce its costs while leaving the buyer no worse off and often better off. The model is extended to handle variable ordering and shipping costs and situations where the seller faces numerous groups of buyers, each having different ordering policies. Finally a case study is presented explicitly showing how the proposed pricing policy can be applied to the situation of a large seller selling to a number of different buyer groups.
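The core observation can be checked numerically under simple EOQ assumptions (hypothetical parameters): the buyer's privately optimal quantity is smaller than the joint-cost-minimizing one, and the seller's saving at the joint quantity exceeds the discount needed to leave the buyer no worse off.

```python
# Joint buyer-seller lot sizing under EOQ assumptions (hypothetical costs):
# the seller can fund a discount out of the joint-cost saving and still gain.
import math

D, A_b, A_s, h = 1200.0, 40.0, 160.0, 3.0   # demand, order costs, holding cost

def buyer_cost(Q):  return A_b * D / Q + h * Q / 2
def seller_cost(Q): return A_s * D / Q

q_buyer = math.sqrt(2 * A_b * D / h)             # buyer's private EOQ
q_joint = math.sqrt(2 * (A_b + A_s) * D / h)     # joint-cost-minimizing quantity

joint_saving = (buyer_cost(q_buyer) + seller_cost(q_buyer)) \
             - (buyer_cost(q_joint) + seller_cost(q_joint))
buyer_penalty = buyer_cost(q_joint) - buyer_cost(q_buyer)  # discount required
print(round(joint_saving, 2), round(buyer_penalty, 2))
# the seller refunds buyer_penalty and keeps joint_saving - buyer_penalty
```

Since the joint saving strictly exceeds the buyer's penalty, a discount equal to (or slightly above) the penalty makes the buyer no worse off (or better off) while the seller keeps the remainder.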

••

TL;DR: A recurring problem in managing project activity involves the allocation of scarce resources to the individual activities comprising the project. Resource conflict resolution decisions must be made whenever the concurrent demand for resources by the competing activities of a project exceeds resource availability.

Abstract: A recurring problem in managing project activity involves the allocation of scarce resources to the individual activities comprising the project. Resource conflict resolution decisions must be made whenever the concurrent demand for resources by the competing activities of a project exceeds resource availability. When these resource conflict resolution decisions arise, project managers seek direction on which activities to schedule and which to delay in order that the resulting increase in project duration is the minimum that can be achieved with the given resource availabilities. The procedures examined in this paper are all designed to provide for this type of decision support. Each procedure examined is enumeration-based, methodically searching the set of possible solutions in such a way that not all possibilities need be considered individually. The methods differ in the manner in which the tree representing partial schedules is generated and saved, and in the methods which are used to identify and discard inferior partial schedules. Each procedure was found to be generally superior on a specific class of problems, and these classes are identified.
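A toy baseline on a hypothetical instance, far simpler than the surveyed procedures: exhaustively enumerate priority orders, building each schedule serially at the earliest resource-feasible start times. The branch-and-bound methods the paper compares improve on exactly this search by pruning partial schedules that provably cannot beat the incumbent.

```python
# Exhaustive enumeration baseline for resource-constrained scheduling
# (hypothetical 4-activity, single-resource instance, no precedence).
import itertools

dur = [3, 2, 2, 4]         # activity durations
req = [2, 1, 2, 1]         # resource required while active
CAP = 3                    # resource availability

def makespan(order):
    """Serial schedule: start each activity at its earliest feasible time."""
    usage, finish = {}, []
    for a in order:
        t = 0
        while any(sum(req[b] for b, (s, f) in usage.items() if s <= u < f)
                  + req[a] > CAP for u in range(t, t + dur[a])):
            t += 1                 # push the start until resources suffice
        usage[a] = (t, t + dur[a])
        finish.append(t + dur[a])
    return max(finish)

best = min(makespan(p) for p in itertools.permutations(range(len(dur))))
print("minimum makespan:", best)   # 6 for this instance
```

Here total work (16 resource-periods against capacity 3) already forces a makespan of at least 6, so the enumerated optimum of 6 is tight.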

••

Abstract: Consider a central depot or plant which supplies several locations experiencing random demands. Orders are placed or production is initiated periodically by the depot. The order arrives after a fixed lead time, and is then allocated among the several locations. The depot itself does not hold inventory. The allocations are finally received at the demand points after another lag. Unfilled demand at each location is backordered. Linear costs are incurred at each location for holding inventory and for backorders. Also, costs are assessed for orders placed by the depot. The object is to minimize the expected total cost of the system over a finite number of time periods.
This system gives rise to a dynamic program with a state space of very large dimension. We show that this model can be systematically approximated by a single-location inventory problem. All the qualitative and quantitative results for such problems can then be applied.
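The flavor of the single-location approximation can be sketched with a newsvendor-style calculation (hypothetical parameters, independent normal demands): with linear holding cost h and backorder cost b, the approximating problem sets the depot's order-up-to level at the critical fractile of aggregate lead-time demand.

```python
# Single-location approximation sketch: base-stock level for the depot's order,
# computed at the newsvendor critical fractile of aggregate demand.
# All demand parameters are hypothetical.
import math
from statistics import NormalDist

h, b = 1.0, 9.0                      # holding / backorder cost per unit
mus = [40.0, 25.0, 35.0]             # mean lead-time demand at each location
sds = [8.0, 5.0, 6.0]                # standard deviations

mu = sum(mus)                        # aggregate demand (independent normals)
sd = math.sqrt(sum(s * s for s in sds))
S = NormalDist(mu, sd).inv_cdf(b / (b + h))   # critical-fractile base stock
print(round(S, 1))                   # order-up-to level for the depot
```

Note the pooling effect: the aggregate standard deviation is smaller than the sum of the location standard deviations, which is part of what makes the single-location reduction attractive.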

••

TL;DR: In this article, the authors examined the relationship between communication and technological innovation and found that at the individual level, the frequency, centrality, and diversity of communication all have positive effects on the success of technological innovation.

Abstract: This study examined the relationship between communication and technological innovation. It focused on the patterns of technical communication among researchers and organizations to find out if these patterns had any effect on the success of technological innovation. The objectives were to: (1) investigate the effects of communication on technological innovation at an individual level, and (2) study the effects of interorganizational communication on technological innovation. Data were gathered from the principal investigators of 117 Sea Grant research projects, which were randomly selected from a sampling frame of 495 projects. Bivariate correlation, and partial correlation, were employed in analyzing the data. The findings indicate that at the individual level, the frequency, centrality, and diversity of communication all have positive effects on the success of technological innovation. However, the frequency of communication was found to have a greater effect than either centrality or diversity of communication…

••

TL;DR: Multi-item capacitated lot-sizing problems are reformulated using a class of valid inequalities, which are facets for the single-item uncapacitated problem, and problems with up to 20 items and 13 periods have been solved to optimality using a commercial mixed integer code.

Abstract: Multi-item capacitated lot-sizing problems are reformulated using a class of valid inequalities, which are facets for the single-item uncapacitated problem. Computational results using this reformulation are reported, and problems with up to 20 items and 13 periods have been solved to optimality using a commercial mixed integer code. We also show how the valid inequalities can easily be generated as part of a cutting plane algorithm, and suggest a further class of inequalities that is useful for single-item capacitated problems.
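The single-item inequalities in question are the (l, S) inequalities: for a period l and any S ⊆ {1, …, l}, Σ_{t∈S} y_t + Σ_{t∉S} d_{t,l} x_t ≥ d_{1,l}, where d_{t,l} is demand summed over periods t through l, y_t is production and x_t the setup variable. Separation is easy: for each t, keep the smaller of y_t and d_{t,l}x_t. A sketch on a hypothetical fractional LP point:

```python
# Separation of (l, S) inequalities for single-item uncapacitated lot sizing.
# The fractional point below is hypothetical, of the typical LP-relaxation
# form x_t = y_t / d_{t,n}.
d = [6, 7, 4, 5]                           # demands per period
y = [12.0, 2.0, 4.0, 4.0]                  # fractional production quantities
x = [y[t] / sum(d[t:]) for t in range(4)]  # fractional setup variables

def most_violated(l):
    """Return (lhs, rhs, S) of the most violated (l, S) cut for period l."""
    lhs, S = 0.0, []
    for t in range(l + 1):
        dtl = sum(d[t:l + 1])              # d_{t,l}
        if y[t] <= dtl * x[t]:
            lhs += y[t]; S.append(t)       # cheaper to put t in S
        else:
            lhs += dtl * x[t]
    return lhs, sum(d[:l + 1]), S

for l in range(len(d)):
    lhs, rhs, S = most_violated(l)
    if lhs < rhs - 1e-9:
        print(f"violated cut at l={l}: lhs={lhs:.2f} < {rhs}, S={S}")
```

Each printed inequality cuts off the fractional point while remaining valid for all integer-feasible production plans, which is how the reformulation tightens the LP bound.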

••

TL;DR: The authors develop an indirect method to estimate utility and willingness to pay (WTP) for reductions in the risk of death at various ages, using a life-cycle model of consumption in which an individual sets his consumption level each year so as to maximize his expected lifetime utility.

Abstract: We develop an indirect method to estimate utility and willingness to pay (WTP) for reductions in the risk of death at various ages. Using a life-cycle model of consumption, we assume that an individual sets his consumption level each year so as to maximize his expected lifetime utility. Alternative assumptions about opportunities for borrowing and annuities characterize two polar types of societies. In our Robinson Crusoe case, an individual must be entirely self-sufficient, and annuities are not available. In our perfect markets case, an individual can borrow against future earnings and purchase actuarially fair annuities; we show that under these assumptions WTP is the sum of livelihood (discounted expected future earnings) and consumer surplus.
To illustrate our methods, we derive WTP for an average financially independent American man under plausible assumptions. The model is calibrated to 1978 earnings (e.g., $18,000 per year for men aged 45-54 with at least some income). In the Robinson Crusoe case, WTP increases from $500,000 at age 20 to a peak of $1,250,000 at age 40, and declines to $630,000 at age 60. In the perfect markets case, age variations are less pronounced; WTP is $1,050,000 at age 20, peaks at $1,070,000 at age 25, and declines to $600,000 at age 60. These results suggest that individuals value risks to their lives at several times the pro-rata share of their future earnings.
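The "livelihood" building block (discounted expected future earnings) is straightforward to compute; a sketch with stylized, hypothetical earnings and survival profiles:

```python
# Livelihood = discounted expected future earnings, weighting each year's
# earnings by the probability of surviving to receive them.
# Earnings and survival profiles here are stylized and hypothetical.
def livelihood(age, earnings, survive_year, r=0.03, horizon=80):
    """earnings(t) and survive_year(t) are functions of age t."""
    alive, total = 1.0, 0.0
    for t in range(age, horizon):
        total += alive * earnings(t) / (1 + r) ** (t - age)
        alive *= survive_year(t)          # P(survive year t | alive at t)
    return total

earn = lambda t: 18_000.0 if t < 65 else 0.0   # flat earnings to retirement
surv = lambda t: 0.999 if t < 50 else 0.99     # stylized annual survival
print(round(livelihood(45, earn, surv)))
```

As in the paper, livelihood falls with age once the remaining earning years shrink, which drives the age pattern of the perfect-markets WTP.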

••

TL;DR: The authors review recent operations research contributions to blood inventory management theory and practice, addressing several important issues from a unified perspective of theory and practice and pointing out new areas for further research.

Abstract: Blood Inventory Management has attracted significant interest from the Operations Research profession during the last 15 years. A number of methodological contributions have been made in the areas of inventory theory and combinatorial optimization that can be of use to other products or systems. These contributions include the development of exact and approximate ordering and issuing policies for an inventory system, the analysis of LIFO or multi-product systems, and various forms of distribution scheduling. In addition, many of these results have been implemented, either as decision rules for efficient blood management in a hospital, or as decision support systems for hierarchical planning in a Regional Blood Center.
In this paper we attempt a review of the recent Operations Research contributions to blood inventory management theory and practice. Whereas many problems have been solved, others remain open and new ones keep being created with advances in medical technology and practices. Our approach is not to present an exhaustive review of all the literature in the field, but rather to address several important issues from a unified perspective of theory and practice, and point out new areas for further research.
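One recurring theme in this literature, the effect of issuing policy (FIFO vs. LIFO) on outdating of perishable units, can be illustrated with a toy simulation. The demand distribution, shelf life, and order-up-to level below are arbitrary placeholders, not policies from the surveyed work:

```python
import random

def simulate(issue_policy="FIFO", shelf_life=5, order_up_to=12,
             days=1000, seed=0):
    """Toy perishable-inventory simulation: `stock` holds the age (days on
    shelf) of each unit; units reaching shelf_life outdate. Demand is a
    made-up uniform placeholder."""
    rng = random.Random(seed)
    stock = []                     # ages of units on hand
    outdates = shortages = 0
    for _ in range(days):
        stock += [0] * max(0, order_up_to - len(stock))   # replenish
        demand = rng.randint(0, 6)                        # placeholder demand
        stock.sort(reverse=(issue_policy == "FIFO"))      # oldest first if FIFO
        issued = min(demand, len(stock))
        shortages += demand - issued
        stock = stock[issued:]                            # issue from the front
        stock = [age + 1 for age in stock]                # one day passes
        outdates += sum(age >= shelf_life for age in stock)
        stock = [age for age in stock if age < shelf_life]
    return outdates, shortages
```

Running both policies on the same demand stream shows the familiar result that FIFO issuing outdates far fewer units than LIFO.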

••

TL;DR: In this paper, an interactive method employing pairwise comparisons of attainable solutions is developed for solving the discrete, deterministic multiple criteria problem assuming a single decision maker who has an implicit quasi-concave increasing utility (or value) function.

Abstract: An interactive method employing pairwise comparisons of attainable solutions is developed for solving the discrete, deterministic multiple criteria problem assuming a single decision maker who has an implicit quasi-concave increasing utility (or value) function. The method chooses an arbitrary set of positive multipliers to generate a proxy composite linear objective function which is then maximized over the set of solutions. The maximizing solution is compared with several solutions using pairwise judgments asked of the decision maker. Responses are used to eliminate alternatives using convex cones based on expressed preferences, and then a new set of weights is found that satisfies the indicated preferences. The requisite theory and proofs as well as a detailed numerical example are included. In addition, the results of some computational experiments to test the effectiveness of the method are described.
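The first step of the method, maximizing a proxy composite linear objective built from positive multipliers, and the later search for weights consistent with the decision maker's pairwise judgments, can be sketched as follows. This is a simplistic brute-force stand-in for the paper's convex-cone machinery, with made-up alternatives:

```python
def best_by_weights(alternatives, weights):
    """Maximize the proxy composite linear objective sum_i w_i * z_i over a
    discrete set of criterion vectors (criteria assumed 'more is better')."""
    return max(alternatives, key=lambda z: sum(w * v for w, v in zip(weights, z)))

def weights_satisfying(preferences, grid=11):
    """Brute-force search (two criteria only) for positive weights consistent
    with expressed pairwise preferences [(preferred, dispreferred), ...] --
    a crude stand-in for the paper's weight-finding step."""
    for i in range(1, grid):
        w = (i / grid, 1 - i / grid)
        if all(sum(x * wi for x, wi in zip(a, w)) >
               sum(x * wi for x, wi in zip(b, w)) for a, b in preferences):
            return w
    return None

# Hypothetical two-criterion alternatives.
alts = [(10, 2), (7, 7), (3, 9)]
print(best_by_weights(alts, (0.5, 0.5)))   # -> (7, 7)
```

If the decision maker then states that (3, 9) is preferred to (10, 2), `weights_satisfying` returns a weight vector favoring the second criterion, and the composite objective is re-maximized with it.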

••

TL;DR: In this paper, a process-theoretic approach is employed to postulate a process model of the natural logic evident in organizational policy making, which is used to explain how the policies of a sample firm for which 20 years of data are available became adopted and how, together with critical events, this caused the firm to evolve in particular directions rather than others.

Abstract: A process-theoretic approach, seldom used but not without promise for organizational behavior research, is employed to postulate a process model of the natural logic evident in organizational policy making. The model is used to explain how the policies of a sample firm for which 20 years of data are available became adopted and how, together with critical events, this caused the firm to evolve in particular directions rather than others. Implications of the study are put forward in terms of identifying pathologies of the policy making process. Some prescriptions are put forward for the proper control of organizations by supervisory bodies, such as boards of directors. It is suggested that Management Science, in the form of systematic procedures for adaptive organizational design and updatable cause maps, may have an important future role to play in senior management affairs.

••

TL;DR: In this paper, the scheduling of lot sizes in multistage production environments is a fundamental problem in many Material Requirements Planning Systems, and many heuristics have been suggested for this problem with...

Abstract: The scheduling of lot sizes in multistage production environments is a fundamental problem in many Material Requirements Planning Systems. Many heuristics have been suggested for this problem with ...

••

TL;DR: Using a new classification scheme, the paper introduces a variety of distance-constrained problems defined in a unified manner, and presents integer programming formulations of several new problems along with the results of applying linear programming relaxation methods.

Abstract: This paper concerns a class of network location problems with minimum or maximum separation requirements between uncapacitated facilities, between demand points and the facilities, or both. Its purpose is threefold. First, to recognize, through various motivating illustrations, distance constraints as increasingly common real-life restrictions. Using a new classification scheme, the paper introduces a variety of distance-constrained problems defined in a unified manner, including a number of new problems. Second, to survey existing solution techniques, available for only a few such constrained problems. Finally, to shed some light on as-yet unstudied problems by exploring possible extensions of some of the known solution techniques or discussing the varying degrees of difficulty involved. In particular, the paper presents integer programming formulations of several new problems along with the results of applying linear programming relaxation methods. Although the computational experience is somewhat disappointing for some of these problems, the results provide greater insight into the problems. With the stated purpose, it is hoped that this paper will stimulate future research in this important problem area.
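As a toy illustration of one problem in this class, a brute-force search (not the paper's integer-programming approach) can pick p facility sites at least a minimum distance apart while minimizing the worst demand-to-facility distance. The distance matrix and point indices in the usage are invented:

```python
from itertools import combinations

def best_separated_placement(dist, sites, demands, p, min_sep):
    """Choose p facility sites, pairwise at least min_sep apart, minimizing
    the maximum demand-to-nearest-facility distance. `dist[i][j]` is a
    symmetric distance matrix over all points; brute force over all
    candidate subsets, so only suitable for tiny instances."""
    best, best_cost = None, float("inf")
    for combo in combinations(sites, p):
        if any(dist[a][b] < min_sep for a, b in combinations(combo, 2)):
            continue                       # violates the separation constraint
        cost = max(min(dist[d][f] for f in combo) for d in demands)
        if cost < best_cost:
            best, best_cost = combo, cost
    return best, best_cost
```

The integer-programming formulations the paper studies encode the same separation constraints as linear inequalities over binary location variables, which is what the LP relaxations are applied to.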

••

TL;DR: It is found that Illinois strip mines are fairly efficient relative to each other, and suggestive evidence that inefficient mines tend to have relatively high stripping ratios, high labor-output ratios, single rather than multiple coal seams, and lower earth-moving capacity.

Abstract: The purpose of this paper is to apply a generalized version of the Farrell measure of technical efficiency to a sample of Illinois strip mines. We disaggregate the original Farrell measure, which was designed to measure lost output or wasted inputs due to underutilization of inputs, into three mutually exclusive and exhaustive components: (1) a measure of purely technical efficiency, (2) a measure of input congestion (overutilization of some inputs), and (3) a measure of scale efficiency. This approach has the advantage that it provides additional information on the sources of inefficiency of production, which should be useful to managers in general, not only in strip mining. Simple linear programming techniques are derived and used in calculating these efficiency measures for our sample. We find that Illinois strip mines are fairly efficient relative to each other; the major source of inefficiency was deviation from the optimal scale of production. We also find suggestive evidence that inefficient mines tend to have relatively high stripping ratios, high labor-output ratios, single rather than multiple coal seams, and lower earth-moving capacity.
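A minimal sketch of the kind of linear program involved, here the standard input-oriented CCR/Farrell envelopment LP under constant returns to scale, using scipy. This is the textbook formulation, not necessarily the paper's exact disaggregation into the three components:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, j0):
    """Input-oriented envelopment LP for unit j0 (constant returns):
        min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,  lam >= 0,
    where X is (m inputs x n units) and Y is (s outputs x n units).
    Returns the Farrell-type radial efficiency score theta in (0, 1]."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]             # minimize theta
    A_in = np.c_[-X[:, [j0]], X]            # X lam - theta * x0 <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]     # -Y lam <= -y0
    res = linprog(c,
                  A_ub=np.r_[A_in, A_out],
                  b_ub=np.r_[np.zeros(m), -Y[:, j0]],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]
```

For a toy one-input, one-output sample, units on the best-ratio frontier score 1 and a unit producing half the best output per unit input scores 0.5.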

••

TL;DR: In this paper, the authors argue that there is insufficient justification for using any function of N fatalities to model societal impacts, and they propose that models based solely on functions of N be abandoned in favor of models that elaborate in detail the significant events and consequences likely to result from an accident.

Abstract: A number of proposals have been put forth regarding the proper way to model the societal impact of fatal accidents. Most of these proposals are based on some form of utility function asserting that the social cost or disutility of N lives lost in a single accident is a function of N^α. A common view is that a single large accident is more serious than many small accidents producing the same number of fatalities, hence α > 1. Drawing upon a number of empirical studies, we argue that there is insufficient justification for using any function of N fatalities to model societal impacts. The inadequacy of such models is attributed, in part, to the fact that accidents are signals of future trouble. The societal impact of an accident is determined to an important degree by what it signifies or portends. An accident that causes little direct harm may have immense consequences if it increases the judged probability and seriousness of future accidents. We propose that models based solely on functions of N be abandoned in favor of models that elaborate in detail the significant events and consequences likely to result from an accident.
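The convention the paper argues against is easy to state in code: with disutility proportional to N^α and α > 1, one large accident is scored as worse than many small accidents with the same total death toll (α = 2 here is an illustrative value, not one the paper endorses):

```python
# Disutility of an accident with N fatalities, modeled as N ** alpha.
alpha = 2

one_large = 100 ** alpha          # one accident killing 100 people
many_small = 100 * (1 ** alpha)   # 100 accidents killing 1 person each

# With alpha > 1 the single large accident dominates, even though the
# total number of fatalities is identical -- exactly the scoring the
# paper finds empirically unjustified.
assert one_large > many_small
```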

••

TL;DR: The paper describes the results of an experimental test of a theoretical model of the circumstances under which MIS-users and EDP-specialists interact during the design of an EDP system.

Abstract: The paper describes the results of an experimental test of a theoretical model which explains under what circumstances the interaction between MIS-users and EDP-specialists during the design of an ...

••

TL;DR: A modeling framework is proposed within which certain kinds of benefit interactions, called present value (PV) interactions, are assessed indirectly by explicitly modeling R and D project impacts on profitability.

Abstract: One reason existing approaches for dealing with benefit interactions in economic R and D project selection models are difficult to apply is that it is difficult to assess the interactions directly. This difficulty can be traced at least in part to the lack of a modeling framework within which different types of interaction can be identified and related to project and portfolio benefit. In this paper, a modeling framework is proposed within which certain kinds of benefit interactions, called present value (PV) interactions, are assessed indirectly by explicitly modeling R and D project impacts on profitability. Within the proposed framework, the role of traditionally recognized types of interaction in the calculation of present value is clarified, and it is shown that PV interaction exists even when traditionally recognized types of interaction are assumed to be absent. The proposed framework offers one method for assessing PV interactions. An example illustrates the framework and shows that ignoring PV interactions can result in both nonoptimal project selections and resource allocations, even when traditionally recognized types of interactions are absent. The framework and resulting model should be useful in enhancing decision making in firms using PV-based approaches to project selection, whether or not they are interested in using a model that accounts for PV interactions. This is expected since the overall framework provides a basis for analyzing the probable consequences of assuming that no PV interaction is present and for communicating these probable consequences to management.
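The core idea, that a portfolio's present value can differ from the sum of the projects' standalone present values, can be shown with a toy calculation. The cash flows, discount rate, and the assumed joint cash flow are invented for illustration:

```python
def pv(cash_flows, rate=0.10):
    """Present value of a cash-flow stream (year-end convention)."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

# Hypothetical projects: funded together they share a market, so the
# joint cash flow (150/yr) is less than the sum of the standalone
# flows (100 + 80 = 180/yr).
pv_a = pv([100, 100])
pv_b = pv([80, 80])
pv_joint = pv([150, 150])

interaction = pv_joint - (pv_a + pv_b)   # negative PV interaction
```

A selection model that scores the portfolio as `pv_a + pv_b` overstates its value by `-interaction`, which is how ignoring PV interactions can produce nonoptimal selections.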

••

TL;DR: In this article, the authors proposed new concepts and new results which could lead to a more realistic preference modeling than in classical decision theory, and they presented four fundamental situations of preferences, their combinations and the concept of relational system of preferences.

Abstract: This paper proposes new concepts and new results which could lead to more realistic preference modeling than in classical decision theory. Sections 1–3 present four fundamental situations of preferences, their combinations, and the concept of a relational system of preferences. In §4, a particular case of relational system of preferences is studied. It is associated with the concept of a pseudo-criterion, derived from the classical concept of criterion by the adjunction of two thresholds. Some results are given, generalizing the properties of such well-known structures as complete preorders and semiorders. Sections 5 and 6 emphasize the possibilities the preceding concepts offer for taking into account the imprecisions, irresolutions and incomparabilities that appear in every concrete problem where several criteria must be considered.
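A pseudo-criterion in this sense attaches an indifference threshold q and a preference threshold p to an ordinary criterion value. A minimal sketch (the threshold values and return labels are arbitrary choices, not the paper's notation):

```python
def relation(ga, gb, q=1.0, p=3.0):
    """Compare alternatives a and b on one criterion g via a
    pseudo-criterion with indifference threshold q and preference
    threshold p (q <= p): differences within q are indifference,
    within p weak preference, beyond p strict preference."""
    d = ga - gb
    if abs(d) <= q:
        return "indifference"
    pref = "weak" if abs(d) <= p else "strict"
    better = "a" if d > 0 else "b"
    return f"{pref} preference for {better}"
```

Setting q = p = 0 recovers the classical criterion (any difference is a strict preference), while q = p > 0 yields a semiorder-like structure, which is the sense in which the pseudo-criterion generalizes both.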