
Showing papers in "Management Science in 1996"


Journal ArticleDOI
TL;DR: In this paper, a theoretical model is proposed that links strong environmental management to improved perceived future financial performance, as measured by stock market performance, and the linkage to firm performance is tested empirically using financial event methodology and archival data of firm-level environmental and financial performance.
Abstract: Environmental management has the potential to play a pivotal role in the financial performance of the firm. Many individuals suggest that profitability is hurt by the higher production costs of environmental management initiatives, while others cite anecdotal evidence of increased profitability. A theoretical model is proposed that links strong environmental management to improved perceived future financial performance, as measured by stock market performance. The linkage to firm performance is tested empirically using financial event methodology and archival data of firm-level environmental and financial performance. Significant positive returns were measured for strong environmental management as indicated by environmental performance awards, and significant negative returns were measured for weak environmental management as indicated by environmental crises. The implicit financial market valuation of these events also was estimated. Cross-sectional analysis of the environmental award events revealed differences for first-time awards and between industries. First-time award announcements were associated with greater increases in market valuation, although smaller increases were observed for firms in environmentally dirty industries, possibly indicative of market skepticism. This linkage between environmental management and financial performance can be used by both researchers and practitioners as one measure of the benefits experienced by industry leaders, and as one criterion against which to measure investment alternatives.
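The event-study calculation behind this kind of analysis can be illustrated with a minimal market-model sketch: estimate a firm's normal relationship to the market over a pre-event window, then measure the event-day abnormal return as the deviation from that prediction. The data below are synthetic and the specification is a generic one, not necessarily the authors' exact event methodology.

```python
import numpy as np

# Minimal market-model event-study sketch (synthetic data, not the authors' dataset).
rng = np.random.default_rng(0)

# 250 days of estimation-window returns for the market and for one firm.
market = rng.normal(0.0005, 0.01, 250)
firm = 0.0002 + 1.1 * market + rng.normal(0, 0.008, 250)

# Estimate alpha and beta by OLS over the estimation window.
X = np.column_stack([np.ones_like(market), market])
alpha, beta = np.linalg.lstsq(X, firm, rcond=None)[0]

# Abnormal return on the event day = actual return minus the market-model prediction.
event_market_return = 0.004   # hypothetical market return on the announcement day
event_firm_return = 0.021     # hypothetical firm return on the announcement day
abnormal_return = event_firm_return - (alpha + beta * event_market_return)
print(f"event-day abnormal return: {abnormal_return:.4f}")
```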

2,468 citations


Journal ArticleDOI
TL;DR: In this article, the authors used new firm-level data on several components of IS spending for 1987-1991 and found that the gross marginal product (MP) for computer capital averaged 81% for the firms in their sample.
Abstract: The "productivity paradox" of information systems (IS) is that, despite enormous improvements in the underlying technology, the benefits of IS spending have not been found in aggregate output statistics. One explanation is that IS spending may lead to increases in product quality or variety which tend to be overlooked in the aggregate statistics, even if they increase output at the firm level. Furthermore, the restructuring and cost-cutting that are often necessary to realize the potential benefits of IS have only recently been undertaken in many firms. Our study uses new firm-level data on several components of IS spending for 1987-1991. The dataset includes 367 large firms which generated approximately 1.8 trillion dollars in output in 1991. We supplemented the IS data with data on other inputs, output, and price deflators from other sources. As a result, we could assess several econometric models of the contribution of IS to firm-level productivity. Our results indicate that IS spending has made a substantial and statistically significant contribution to firm output. We find that the gross marginal product (MP) for computer capital averaged 81% for the firms in our sample. We find that the MP for computer capital is at least as large as the marginal product of other types of capital investment and that, dollar for dollar, IS labor spending generates at least as much output as spending on non-IS labor and expenses. Because the models we applied were similar to those that have been previously used to assess the contribution of IS and other factors of production, we attribute the different results to the fact that our data set is more current and larger than others explored. We conclude that the productivity paradox disappeared by 1991, at least in our sample of firms.

2,233 citations


Journal ArticleDOI
TL;DR: A confirmatory, empirical test of the revised Technology Acceptance Model (TAM) confirmed that the TAM is a valuable tool for predicting intentions to use an IS and introduced an objective measure of technology acceptance, actual usage rather than self-report usage, which supports that self-report usage may not be an appropriate surrogate measure for actual usage.
Abstract: Davis et al. [Davis, F. D., R. P. Bagozzi, P. R. Warshaw. 1989. User acceptance of computer technology: A comparison of two theoretical models. Management Sci. 35(8) 982-1003.] proposed, tested, and revised the Technology Acceptance Model (TAM), which attempts to explain and predict why users sometimes accept and sometimes reject information systems (IS). The research reported here (1) provides a confirmatory, empirical test of the revised TAM and (2) introduces an objective measure of technology acceptance, actual usage rather than self-report usage. Subjects' beliefs about the usefulness and ease of use of an electronic mail system, their intentions to use the system, and their usage of it 15 weeks later were measured in a longitudinal study. The results confirmed that the TAM is a valuable tool for predicting intentions to use an IS. The findings here combined with results from other studies in this area suggest that the original TAM may be more appropriate than the two-version revised TAM. However, the addition of an experience component to the original TAM may be a significant enhancement. In addition, the results support that self-report usage may not be an appropriate surrogate measure for actual usage.

1,664 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed and tested theories that explain the variation in the organizational complexity-innovation relationship in greater detail, considering two major indicators of organizational complexity: structural complexity and organizational size.
Abstract: Current research in organizational innovation is extensive, yet, because of limitations in scope, most studies are not adequately encompassing. These studies typically relate organizational variables to innovation and control at most for the effect of one contingency factor. Because innovation depends upon a complex host of factors, such theories have limited predictive application. This study intends to develop and test theories that explain the variation in the organizational complexity-innovation relationship in greater detail. The study considers two major indicators of organizational complexity: structural complexity and organizational size. Hypotheses are proposed on the effects of 14 contingency factors on the relationships between structural complexity and innovation and between organizational size and innovation. The contingency factors include environmental uncertainty, organizational size, industrial sectors, types of innovation, and stages of innovation adoption. Using a meta-analytic procedure for multivariate analysis, the hypotheses are then tested with data from published studies in organizational innovation during the last three decades. The effects of four methods variables (operational definitions of innovation, structural complexity, and size, and similarity of data sources) are controlled for in testing the hypotheses. This process results in two powerful and encompassing models: (1) the association between structural complexity and innovation depends upon the operational definition of complexity, environmental uncertainty, use of manufacturing organizations, use of service organizations, focus on technical innovations, focus on product innovations, and focus on implementation of innovation; and (2) the association between organizational size and innovation depends upon the operational definition of size, environmental uncertainty, use of service organizations, use of for-profit organizations, focus on technical innovations, and focus on product innovations. These models suggest avenues for further theory development and research, which we discuss.

1,381 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose two preference conditions that are necessary and sufficient for concavity and convexity of the weighting function, test these conditions using preference ladders, and fit the ladder data with weighting functions proposed by Tversky and Kahneman and by Prelec.
Abstract: When individuals choose among risky alternatives, the psychological weight attached to an outcome may not correspond to the probability of that outcome. In rank-dependent utility theories, including prospect theory, the probability weighting function permits probabilities to be weighted nonlinearly. Previous empirical studies of the weighting function have suggested an inverse S-shaped function, first concave and then convex. However, these studies suffer from a methodological shortcoming: estimation procedures have required assumptions about the functional form of the value and/or weighting functions. We propose two preference conditions that are necessary and sufficient for concavity and convexity of the weighting function. Empirical tests of these conditions are independent of the form of the value function. We test these conditions using preference "ladders", a series of questions that differ only by a common consequence. The concavity-convexity ladders validate previous findings of an inverse S-shaped weighting function, concave up to p < 0.40 and convex beyond that probability. The tests also show significant nonlinearity away from the boundaries, 0 and 1. Finally, we fit the ladder data with weighting functions proposed by Tversky and Kahneman [Tversky, Amos, Daniel Kahneman. 1992. Advances in prospect theory: Cumulative representation of uncertainty. J. Risk and Uncertainty 5 297-323.] and Prelec [Prelec, Dražen. 1995. The probability weighting function. Unpublished paper.].
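For reference, the two fitted one-parameter weighting functions take the following standard forms (the parameter values here are illustrative, not the estimates reported in the paper):

```python
import numpy as np

def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) one-parameter weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

def prelec_weight(p, alpha=0.65):
    """Prelec one-parameter weighting function w(p) = exp(-(-ln p)^alpha)."""
    return np.exp(-(-np.log(p))**alpha)

# Both forms overweight small probabilities and underweight large ones (inverse S-shape).
p = np.linspace(0.01, 0.99, 9)
print(np.round(tk_weight(p), 3))
print(np.round(prelec_weight(p), 3))
```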

1,007 citations


Journal ArticleDOI
TL;DR: A fast and easily implementable approximation algorithm for the problem of finding a minimum makespan in a job shop is presented, based on a taboo search technique with a specific neighborhood definition which employs the notions of a critical path and blocks of operations.
Abstract: A fast and easily implementable approximation algorithm for the problem of finding a minimum makespan in a job shop is presented. The algorithm is based on a taboo search technique with a specific neighborhood definition which employs the notions of a critical path and blocks of operations. Computational experiments on instances with up to 2,000 operations show that the algorithm not only finds shorter makespans than the best approximation approaches but also runs in shorter time. It solves the well-known 10 × 10 hard benchmark problem within 30 seconds on a personal computer.
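The overall control structure of such an algorithm is a tabu (taboo) search loop with short-term memory and an aspiration criterion. The sketch below is a generic skeleton only: the `neighbors` and `makespan` callables are placeholders, and it does not implement the paper's specific critical-path and block-based neighborhood.

```python
from collections import deque

def tabu_search(initial, neighbors, makespan, n_iters=1000, tenure=8):
    """Generic tabu-search skeleton: keep a short-term memory of recent moves and
    move to the best non-tabu neighbor; a tabu move is accepted anyway if it beats
    the best makespan found so far (aspiration)."""
    current = best = initial
    best_cost = makespan(best)
    tabu = deque(maxlen=tenure)
    for _ in range(n_iters):
        candidates = []
        for move, sol in neighbors(current):        # neighbors yields (move, solution) pairs
            cost = makespan(sol)
            if move not in tabu or cost < best_cost:
                candidates.append((cost, move, sol))
        if not candidates:
            break
        cost, move, current = min(candidates, key=lambda c: c[0])
        tabu.append(move)
        if cost < best_cost:
            best, best_cost = current, cost
    return best, best_cost
```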

964 citations


Journal ArticleDOI
TL;DR: In this article, the authors identify several predictors of joint venture failure and test for their influences, finding that the presence of competition between joint venture partners outside of the agreement significantly impairs the operation's chances of survival.
Abstract: Why do so many joint ventures fail? Despite the fact that their success is the exception rather than the rule, the literature on why joint venture performance has been so poor remains fragmentary. We address this issue, adopting a transaction-cost economics perspective and modeling joint ventures as governance structures that blend the advantages and drawbacks of both markets and hierarchies. Using a data base on electronics industry ventures and event history analysis, we identify several predictors of joint venture failure and test for their influences. A key finding is that the presence of competition between joint venture partners outside of the agreement significantly impairs the operation's chances of survival. We also find clear evidence that the failure rate of joint ventures is nonmonotonic, rising to a peak in the middle term and then declining. Finally, we compare and contrast predictors of terminations due to failure to those due to acquisition of the joint venture by one of its partners. Our overall conclusions highlight implications for strategic choice theory-building and the management of joint ventures.

855 citations


Journal ArticleDOI
TL;DR: This paper examined the effect of product variety on manufacturing performance, defined here as total labor productivity and consumer-perceived product quality, using data from the International Motor Vehicle Program (M.I.T.) study of 70 assembly plants worldwide.
Abstract: This paper examines the effect of product variety on manufacturing performance, defined here as total labor productivity and consumer-perceived product quality. Using data from the International Motor Vehicle Program (M.I.T.) study of 70 assembly plants worldwide, the paper examines three dimensions of product variety, at fundamental, peripheral, and intermediate levels. The international sample reveals great variation in the distribution of each type of product variety in different regions, reflecting in part different strategies for variety. Furthermore, the impact of different kinds of product variety on performance varies, and is generally much less than the conventional manufacturing wisdom would predict. However, an intermediate type of product variety, here called parts complexity, was found to have a persistent negative impact on productivity. Finally, the study provides partial support for the hypothesis that management policies, in both operations and human resource areas, can facilitate the abs...

620 citations


Journal ArticleDOI
TL;DR: In this article, the authors assess the relative role of national and corporate cultural fit in predicting effective integration between merger partners by examining both international and domestic mergers; their findings confirm that national and corporate culture are separate constructs with variable attitudinal and behavioral correlates.
Abstract: While cultural fit has been acknowledged to be a potentially important factor in mergers and acquisitions, the concept has been ill-defined, with no distinction drawn between the national and corporate levels of culture. By examining both international and domestic mergers, the present study assesses the relative role of national and corporate cultural fit in predicting effective integration between merger partners. The innovative, nonparametric co-plot method is introduced, and its main advantage, the simultaneous consideration of both variables and observations, is utilized to explore cultural fit in the two groups of mergers. The findings confirm that national and corporate culture are separate constructs with variable attitudinal and behavioral correlates.

604 citations


Journal ArticleDOI
TL;DR: In this article, three published approximation formulae for selecting the best multiattribute alternative based on rank-ordered weights are evaluated, and all of them are surprisingly efficacious in determining the best multiattribute alternative.
Abstract: Three published approximation formulae for selecting the best multiattribute alternative based on rank-ordered weights are evaluated. All formulae are surprisingly efficacious in determining the best multiattribute alternative. Rank order centroid (ROC) weights are more accurate than the other rank-based formulae; furthermore, the ROC formula generalizes to incorporate both other forms of partial information about attribute weights and partial rank order information as well. Because a ROC-based analysis is so straightforward and efficacious, it provides an appropriate implementation tool.
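The rank order centroid weights have a simple closed form, w_i = (1/n) * sum_{k=i}^{n} 1/k for the attribute ranked i-th out of n, which the short sketch below computes directly (a generic illustration, not code from the paper):

```python
def roc_weights(n):
    """Rank order centroid weights for n attributes ranked from most to least important:
    w_i = (1/n) * sum_{k=i}^{n} 1/k."""
    return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

# For four attributes the weights are about 0.521, 0.271, 0.146, 0.063 (they sum to 1).
print(roc_weights(4))
```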

592 citations


Journal ArticleDOI
TL;DR: In this article, the authors introduce a multistage model of the new product development process and show that if product improvements are additive (over stages), it is optimal to allocate maximal time to the most productive development stage.
Abstract: Reduction of new product development cycle time and improvements in product performance have become strategic objectives for many technology-driven firms. These goals may conflict, however, and firms must explicitly consider the tradeoff between them. In this paper we introduce a multistage model of the new product development process which captures this tradeoff explicitly. We show that if product improvements are additive (over stages), it is optimal to allocate maximal time to the most productive development stage. We then indicate how optimal time-to-market and its implied product performance targets vary with exogenous factors such as the size of the potential market, the presence of existing and new products, profit margins, the length of the window of opportunity, the firm's speed of product improvement, and competitor product performance. We show that some new product development metrics employed in practice, such as minimizing break-even time, can be sub-optimal if firms are striving to maximize prof...

Journal ArticleDOI
TL;DR: In this article, the authors investigate the impact of winning a quality award on the market value of firms by estimating the mean abnormal change in the stock prices of a sample of firms on the date when information about winning an award was publicly announced.
Abstract: This paper empirically investigates the impact of winning a quality award on the market value of firms by estimating the mean “abnormal” change in the stock prices of a sample of firms on the date when information about winning a quality award was publicly announced. We note that the abnormal returns generated by the quality award winning announcements provide a lower bound for the impact of implementing an effective quality improvement program. Our results show that the stock market reacts positively to quality award announcements. Statistically significant mean abnormal returns on the day of the announcements ranged from a low of 0.59% to a high of 0.67% depending on the model used to generate the abnormal returns. The reaction was particularly strong for smaller firms (mean abnormal returns ranged from a low of 1.16% to a high of 1.26%), and for firms that won awards from independent organizations such as Malcolm Baldrige, Philip Crosby, etc. (mean abnormal returns ranged from a low of 1.31% to a h...

Journal ArticleDOI
TL;DR: In this article, the authors present two direct methods, a pathwise method and a likelihood ratio method, for estimating derivatives of security prices using simulation, and compare them to the standard method of resimulation to estimate derivatives.
Abstract: Simulation has proved to be a valuable tool for estimating security prices for which simple closed form solutions do not exist. In this paper we present two direct methods, a pathwise method and a likelihood ratio method, for estimating derivatives of security prices using simulation. With the direct methods, the information from a single simulation can be used to estimate multiple derivatives along with a security's price. The main advantage of the direct methods over resimulation is increased computational speed. Another advantage is that the direct methods give unbiased estimates of derivatives, whereas the estimates obtained by resimulation are biased. Computational results are given for both direct methods, and comparisons are made to the standard method of resimulation to estimate derivatives. The methods are illustrated for a path independent model (European options), a path dependent model (Asian options), and a model with multiple state variables (options with stochastic volatility).
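As an illustration of the pathwise idea, the sketch below estimates a European call's price and its delta (the derivative with respect to the initial asset price) from the same set of simulated paths under geometric Brownian motion; the parameters are arbitrary and the example is far simpler than the models treated in the paper.

```python
import numpy as np

# Pathwise estimate of a European call's delta from the same paths used for its price.
rng = np.random.default_rng(2)
S0, K, r, sigma, T, n = 100.0, 100.0, 0.05, 0.2, 1.0, 200_000

Z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
disc = np.exp(-r * T)

price = disc * np.maximum(ST - K, 0.0).mean()
# Pathwise derivative of the discounted payoff with respect to S0: 1{ST > K} * ST / S0.
delta = disc * ((ST > K) * ST / S0).mean()
print(f"price ≈ {price:.3f}, pathwise delta ≈ {delta:.3f}")
```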

Journal ArticleDOI
TL;DR: Across monthly and quarterly time series, the neural networks did significantly better than traditional methods in the present experiment, and were particularly effective for discontinuous time series.
Abstract: Neural networks have been advocated as an alternative to traditional statistical forecasting methods. In the present experiment, time series forecasts produced by neural networks are compared with forecasts from six statistical time series methods generated in a major forecasting competition, Makridakis et al. [Makridakis, S., A. Anderson, R. Carbone, R. Fildes, M. Hibon, R. Lewandowski, J. Newton, E. Parzen, R. Winkler. 1982. The accuracy of extrapolation (time series) methods: Results of a forecasting competition. J. Forecasting 1 111-153.]; the traditional method forecasts were estimated by experts in the particular technique. The neural networks were estimated using the same ground rules as the competition. Across monthly and quarterly time series, the neural networks did significantly better than traditional methods. As suggested by theory, the neural networks were particularly effective for discontinuous time series.
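A minimal version of this kind of experiment can be sketched with a small feedforward network fit to lagged values of a series. The example below uses scikit-learn on a synthetic monthly series; the architecture, lag structure, and data are illustrative assumptions, not the networks or ground rules used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic monthly series; features are the previous 12 observations (a common setup,
# not necessarily the one used in the study).
rng = np.random.default_rng(3)
t = np.arange(240)
series = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, t.size)

lags = 12
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X[:-12], y[:-12])              # hold out the last 12 points
forecast = model.predict(X[-12:])        # one-step-ahead forecasts for the holdout
print(np.round(forecast - y[-12:], 2))   # forecast errors
```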

Journal ArticleDOI
TL;DR: In this article, a generalized model of dynamic pricing and lot-sizing by a reseller who sells a perishable good is formulated, in which, when it is economic to backlog demand, the reseller can plan for periods of shortage during which demand can be partially backordered.
Abstract: We formulate a generalized model of dynamic pricing and lot-sizing by a reseller who sells a perishable good. We assume that when it is economic to backlog demand, the reseller can plan for periods of shortage during which demand can be partially backordered. When the good is highly perishable, the reseller may need to backlog demand in order to market the good at a reasonable price. We present a simple solution procedure for solving the optimization problem. The procedure entails solving first a single nonlinear equation and then, if required, two nonlinear equations.

Journal ArticleDOI
TL;DR: In this article, the gamble-tradeoff method is proposed for eliciting utilities in decision under risk or uncertainty; the method is robust against probability distortions and misconceptions, which constitute a major cause of violations of expected utility and generate inconsistencies in utility elicitation.
Abstract: This paper proposes a new method, the gamble-tradeoff method, for eliciting utilities in decision under risk or uncertainty. The elicitation of utilities, to be used in the expected utility criterion, turns out to be possible even if probabilities are ambiguous or unknown. A disadvantage of the tradeoff method is that a few more questions usually must be asked to clients. Also, the lotteries that are needed are somewhat more complex than in the certainty-equivalent method or in the probability-equivalent method. The major advantage of the tradeoff method is its robustness against probability distortions and misconceptions, which constitute a major cause of violations of expected utility and generate inconsistencies in utility elicitation. Thus the tradeoff method retains full validity under prospect theory, rank-dependent utility, and the combination of the two, i.e., cumulative prospect theory. The tradeoff method is tested for monetary outcomes and for outcomes describing life-duration. We find higher risk aversion for life duration, but the tradeoff method elicits similar curvature of utility. Apparently the higher risk aversion for life duration is due to more pronounced deviations from expected utility.
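The robustness claim rests on the standard-sequence construction commonly associated with the tradeoff method: the decision maker states outcomes that make pairs of gambles indifferent, and under expected utility those outcomes are equally spaced in utility regardless of the (possibly distorted or unknown) probability. A schematic version, with fixed reference outcomes R > r assumed for illustration:

```latex
% Indifference structure behind the gamble-tradeoff method (schematic, outcomes R > r fixed).
% The respondent supplies x_i such that (x_i, p; r) ~ (x_{i-1}, p; R), for i = 1, ..., n.
\[
  p\,U(x_i) + (1-p)\,U(r) = p\,U(x_{i-1}) + (1-p)\,U(R)
  \;\Longrightarrow\;
  U(x_i) - U(x_{i-1}) = \tfrac{1-p}{p}\bigl(U(R) - U(r)\bigr).
\]
% The increment is the same for every i, so x_0, x_1, \dots, x_n are equally spaced in
% utility even when p is unknown or distorted, which is the source of the method's robustness.
```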

Journal ArticleDOI
TL;DR: In this paper, the authors argue that causal understanding, innovation team proficiency, emergence and mobilization of new competences, and creation of competitive advantages are necessary precursors for a firm to capture rents from innovation.
Abstract: Four antecedents, it is argued, are necessary precursors for a firm to capture rents from innovation. The antecedents are causal understanding; innovation team proficiency; emergence and mobilization of new competences; and creation of competitive advantages, each of which is conceptually distinct and precisely defined in the paper. These constructs are linked together in a stage model and subsequently operationalized and tested using LISREL. Substantial support is found for the central thesis, that achieving each of the four antecedent processes increases the predicted rents from an innovation project.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated whether firms actually link their process choice to product customization and other competitive priorities as hypothesized, and whether compatible decision patterns lead to better performance, and found that process choice is highly related to the degree of product customization, and also to the emphasis placed on the quality and cost competitive priorities.
Abstract: Process choice, a major part of operations strategy, is a key decision that links operations to business strategy. Hayes and Wheelwright, among others, argue that the emphasis given to product customization and other competitive priorities should agree with process choice. Our empirical study investigates whether firms actually link their process choice to product customization and other competitive priorities as hypothesized, and whether compatible decision patterns lead to better performance. Analysis of data collected from managers at 144 U.S. manufacturing plants shows a strong correlation between process choice, product customization, and competitive priorities. Process choice is highly related to the degree of product customization, and also to the emphasis placed on the quality and cost competitive priorities. Job shops and batch shops tend to have more product customization, higher costs, and higher quality. Some continuous flow shops use part commonality and flexible automation to achieve more customization than would otherwise be expected. Without these initiatives, customization in continuous flow shops results in weak performance.

Journal ArticleDOI
TL;DR: This work provides a stochastic dynamic programming model for this aggregate advance scheduling of elective surgery when the operating rooms' capacity utilization by emergency surgery, as well as by elective procedures, is uncertain.
Abstract: This work concerns the advance scheduling of elective surgery when the operating rooms' capacity utilization by emergency surgery, as well as by elective procedures, is uncertain. New requests for bookings of elective surgery arrive each day. Such procedures preferably would be performed as soon as possible, but admitting too many patients may result in exceeding a day's capacity, possibly necessitating turning away some emergency cases. So the problem facing the hospital at the start of each day is how many of the additional requests for elective surgery to assign for that day. We provide a stochastic dynamic programming model for this aggregate advance scheduling problem. The model has some novel mathematical features. We analyze it and characterize the nature of the optimal policy, which is not necessarily of a control-limit type. Plausible numerical examples which confirm our theoretical results and provide additional insights are reported.

Journal ArticleDOI
TL;DR: In this article, an analytical approach based on rank statistics is presented for comparing programs within the Data Envelopment Analysis (DEA) efficiency evaluation framework; the procedure distinguishes between managerial and programmatic inefficiency and uses the Mann-Whitney rank statistic to evaluate the statistical significance of the differences observed between a treatment program and its control group program after adjusting for differences in managerial efficiency.
Abstract: This paper presents an analytical approach, based on rank statistics, to the issue of comparing programs within the Data Envelopment Analysis (DEA) efficiency evaluation framework. The program evaluation procedure distinguishes between managerial and programmatic inefficiency and uses the Mann-Whitney rank statistic to evaluate the statistical significance of the differences observed between a treatment program and its control group program after adjusting for differences in managerial efficiency between the programs. A numerical example, based on the data used to evaluate the educational enhancement of Program Follow Through, is used to illustrate the proposed statistical procedures.
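Once adjusted efficiency scores are in hand, the rank comparison itself is a standard Mann-Whitney test. The sketch below applies it to hypothetical adjusted DEA scores for a treatment and a control program; the scores are made up, and the managerial-efficiency adjustment step is not shown.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical managerially-adjusted DEA efficiency scores for a treatment program
# and its control-group program (the adjustment step itself is not shown here).
treatment = np.array([0.92, 0.88, 0.95, 0.79, 0.85, 0.91, 0.87])
control   = np.array([0.81, 0.77, 0.84, 0.73, 0.80, 0.76, 0.82])

stat, p_value = mannwhitneyu(treatment, control, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.4f}")
```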

Journal ArticleDOI
TL;DR: In this article, the authors investigate the computational issues that need to be addressed when incorporating general cutting planes for mixed 0-1 programs into a branch-and-cut framework. The cuts they use are of the lift-and-project variety; some of the issues have a theoretical answer, while others are of an experimental nature and are settled by comparing alternatives on a set of test problems.
Abstract: We investigate the computational issues that need to be addressed when incorporating general cutting planes for mixed 0-1 programs into a branch-and-cut framework. The cuts we use are of the lift-and-project variety. Some of the issues addressed have a theoretical answer, but others are of an experimental nature and are settled by comparing alternatives on a set of test problems. The resulting code is a robust solver for mixed 0-1 programs. We compare it with several existing codes. On a wide range of test problems it performs as well as, or better than, some of the best currently available mixed integer programming codes.

Journal ArticleDOI
TL;DR: A new version of the Monte Carlo method is introduced that has attractive properties for the numerical valuation of derivatives and promises to be very useful for applications in finance.
Abstract: This paper introduces and illustrates a new version of the Monte Carlo method that has attractive properties for the numerical valuation of derivatives. The traditional Monte Carlo method has proven to be a powerful and flexible tool for many types of derivatives calculations. Under the conventional approach pseudo-random numbers are used to evaluate the expression of interest. Unfortunately, the use of pseudo-random numbers yields an error bound that is probabilistic which can be a disadvantage. Another drawback of the standard approach is that many simulations may be required to obtain a high level of accuracy. There are several ways to improve the convergence of the standard method. This paper suggests a new approach which promises to be very useful for applications in finance. Quasi-Monte Carlo methods use sequences that are deterministic instead of random. These sequences improve convergence and give rise to deterministic error bounds. The method is explained and illustrated with several examples. These examples include complex derivatives such as basket options, Asian options, and energy swaps.
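A minimal sketch of the contrast between pseudo-random and quasi-random (Sobol) valuation for a plain European call is shown below, using SciPy's QMC module; the payoff and parameters are illustrative stand-ins for the more complex basket, Asian, and swap examples discussed in the paper.

```python
import numpy as np
from scipy.stats import norm, qmc

# Price a European call two ways: pseudo-random vs. Sobol quasi-random normals.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n = 2**14

def call_price(z):
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

z_pseudo = np.random.default_rng(4).standard_normal(n)

sobol = qmc.Sobol(d=1, scramble=True, seed=4)
u = sobol.random(n).ravel()      # low-discrepancy uniforms in (0, 1)
z_quasi = norm.ppf(u)            # map to normals by the inverse CDF

print(f"pseudo-random: {call_price(z_pseudo):.4f}, Sobol: {call_price(z_quasi):.4f}")
```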

Journal ArticleDOI
TL;DR: In this article, the authors present an inventory control model which includes a Markovian model of the supply system, and the optimal policy has the same structure as in standard models, but its parameters change dynamically to reflect current supply conditions.
Abstract: This paper presents an inventory-control model which includes a Markovian model of the supply system. As that system evolves over time, so do the replenishment leadtimes. The optimal policy has the same structure as in standard models, but its parameters change dynamically to reflect current supply conditions. In this setting, contrary to conventional wisdom, a longer leadtime does not necessarily imply more inventory. The leadtime is important, but so is a concept we call order coverage.

Journal ArticleDOI
TL;DR: In this article, a heuristic based on the optimal solutions of simplified versions of the problem was developed for the catalog sales problem. But given the size of real problems, it is impossible to compute the optimal solution.
Abstract: Catalog sales are among the fastest growing businesses in the U.S. The most important asset a company in this industry has is its list of customers, called the house list. Building a house list is expensive, since the response rate of names from rental lists is low. Cash management therefore plays a central role in this capital intensive business. This paper studies optimal mailing policies in the catalog sales industry when there is limited access to capital. We consider a stochastic environment given by the random responses of customers and a dynamic evolution of the house list. Given the size of real problems, it is impossible to compute the optimal solutions. We therefore develop a heuristic based on the optimal solutions of simplified versions of the problem. The performance of this heuristic is evaluated by comparing its outcome with an upper bound derived for the original problem. Computational experiments show that it behaves satisfactorily. The methodology presented permits the evaluation of potential catalog ventures thus proving useful to entrepreneurs in this industry.

Journal ArticleDOI
TL;DR: In this paper, a heuristic approach for the dynamic multilevel multiitem lotsizing problem in general product structures with multiple constrained resources and setup times is proposed with the help of Lagrangean relaxation.
Abstract: In this paper a heuristic approach for the dynamic multilevel multiitem lotsizing problem in general product structures with multiple constrained resources and setup times is proposed. With the help of Lagrangean relaxation the capacitated multilevel multiitem lotsizing problem is decomposed into several uncapacitated single-item lotsizing problems. From the solutions of these single-item problems lower bounds on the minimum objective function value are derived. Upper bounds are generated by means of a heuristic finite scheduling procedure. The quality of the approach is tested with reference to various problem groups of differing sizes.

Journal ArticleDOI
TL;DR: An approximate procedure based on a time-dependent normal distribution, where the mean and variance are determined by infinite-server approximations, is developed; it is shown to be effective by making comparisons with the exact numerical solution of the Markovian M_t/M/s_t model.
Abstract: We consider a multiserver service system with general nonstationary arrival and service-time processes in which s(t), the number of servers as a function of time, needs to be selected to meet projected loads. We try to choose s(t) so that the probability of a delay (before beginning service) hits or falls just below a target probability at all times. We develop an approximate procedure based on a time-dependent normal distribution, where the mean and variance are determined by infinite-server approximations. We demonstrate that this approximation is effective by making comparisons with the exact numerical solution of the Markovian M_t/M/s_t model.
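A schematic version of the infinite-server logic: for an M_t/M/∞ system the offered load m(t) solves m'(t) = λ(t) − μ m(t), and since the number in system is approximately normal with variance close to the mean in the Markovian case, a square-root-type staffing level keeps the delay probability near the target. The arrival-rate function, target, and discretization below are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import norm

# Infinite-server offered load m(t) for a sinusoidal arrival rate, then a normal-
# approximation staffing rule (variance ~ mean in the Markovian case). Schematic only.
mu = 1.0                                               # service rate (per unit time)
arrival_rate = lambda t: 50 + 20 * np.sin(2 * np.pi * t / 24)   # hypothetical lambda(t)

target = 0.1                                           # target delay probability
z = norm.ppf(1 - target)

dt, horizon = 0.01, 48.0
times = np.arange(0.0, horizon, dt)
m = np.empty_like(times)
m[0] = arrival_rate(0.0) / mu                          # start at the stationary offered load
for i in range(1, times.size):                         # Euler step of m'(t) = lam(t) - mu*m(t)
    m[i] = m[i - 1] + dt * (arrival_rate(times[i - 1]) - mu * m[i - 1])

s = np.ceil(m + z * np.sqrt(m)).astype(int)            # square-root-type staffing level
print(s[::600])                                        # staffing every 6 time units
```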

Journal ArticleDOI
TL;DR: In this paper, the authors investigated a production planning problem in a periodic review environment with variable production capacity, random yields, and uncertain demand, and they proved that the objective function is quasi-convex and that the structure of the optimal policy is characterized by a single critical point for the initial stock level at each period.
Abstract: We investigate a production planning problem in a periodic review environment with variable production capacity, random yields, and uncertain demand. The implications of random yields and variable capacity for lot sizing previously have been explored separately, but not jointly. Many production environments are likely to be subject to both types of uncertainties. To minimize the total discounted expected costs (production, holding, and shortage costs), we formulate the problem as a stochastic dynamic program. For the finite-horizon problem, we prove that the objective function is quasi-convex and that the structure of the optimal policy is characterized by a single critical point for the initial stock level at each period. That is, if the initial stock is greater than this critical point, the optimal planned production is zero; otherwise, it is greater than zero. Expressions for solving the critical point and the optimal planned production are obtained. We further show that the solution for the finite-horizon problem converges to that of the infinite-horizon problem.

Journal ArticleDOI
TL;DR: In this paper, the authors apply this approach to a forecasting task and find that to arrive at a forecast decision makers often search their experience for a situation similar to the one at hand and then make small adjustments to this previous situation.
Abstract: Rapid advances in information technology have brought decision makers the mixed blessing of an increasingly vast amount of easily available data. Designers of decision support systems (DSS) have focused on incorporating the latest technology with little attention to whether these new systems are compatible with the psychology of decision makers. Our premise is that DSS should be designed to take advantage of the distinctive competencies of decision makers while using technology to compensate for their inherent weaknesses. In this study we apply this approach to a forecasting task. We find that to arrive at a forecast, decision makers often search their experience for a situation similar to the one at hand and then make small adjustments to this previous situation. Our theoretical model of the performance of this intuitively appealing strategy shows that it performs reasonably well in highly predictable environments, but performs quite poorly in less predictable environments. Results from an experiment confirm these predictions and show that providing decision makers with a simple linear model in combination with a computerized database of historical cases improves performance significantly. We conclude by discussing how these results can be used to help improve forecasting in applied contexts, such as promotion forecasting in the retail grocery industry.

Journal ArticleDOI
TL;DR: In this paper, a modified version of Nelder-Mead, RS + S9, was proposed to optimize the expected response of a stochastic system with additive white noise error.
Abstract: When the Nelder-Mead method is used to optimize the expected response of a stochastic system (e.g., an output of a discrete-event simulation model), the simplex-resizing steps of the method introduce risks of inappropriate termination. We give analytical and empirical results describing the performance of Nelder-Mead when it is applied to a response function that incorporates an additive white-noise error, and we use these results to develop new modifications of Nelder-Mead that yield improved estimates of the optimal expected response. Compared to Nelder-Mead, the best performance was obtained by a modified method, RS + S9, in which (a) the best point in the simplex is reevaluated at each shrink step and (b) the simplex is reduced by 10% rather than 50% at each shrink step. In a suite of 18 test problems that were adapted from the MINPACK collection of NETLIB, the expected response at the estimated optimal point obtained by RS + S9 had errors that averaged 15% less than at the original method's estimated optimal point, at an average cost of three times as many function evaluations. Two well-known existing modifications for stochastic responses, the (n + 3)-rule and the next-to-worst rule, were found to be inferior to the new modification RS + S9.
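A sketch of just the modified shrink step suggested by the RS + S9 description is given below; it would replace the shrink step inside an otherwise standard Nelder-Mead loop (reflection, expansion, and contraction are not shown), and the function and argument names are assumptions.

```python
import numpy as np

def modified_shrink(simplex, values, f, shrink_factor=0.9):
    """Shrink step in the spirit of RS + S9: (a) re-evaluate the (noisy) best vertex,
    and (b) contract the other vertices toward it by only 10% (factor 0.9) rather
    than the usual 50%. Intended to replace the shrink step of a standard
    Nelder-Mead loop; `f` is the noisy objective, `simplex` an array of vertices."""
    best = int(np.argmin(values))
    values[best] = f(simplex[best])            # (a) re-sample the retained best point
    for i in range(len(simplex)):
        if i != best:
            simplex[i] = simplex[best] + shrink_factor * (simplex[i] - simplex[best])
            values[i] = f(simplex[i])
    return simplex, values
```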

Journal ArticleDOI
TL;DR: In this article, the authors explore the performance consequences of CEO succession, executive team change, and strategic reorientation in different contexts and conclude that simple CEO succession is positively associated with subsequent performance when context is stable, but significantly more negatively associated with later performance in turbulent contexts.
Abstract: This research explores the performance consequences of CEO succession, executive team change, and strategic reorientation in different contexts. Based on team demography and organization learning ideas, we argue that CEO succession or executive team change enhances incremental organization change, while either strategic reorientation or the combination of CEO succession with executive team change triggers discontinuous organization change. We hypothesize that these contrasting intervention modes are appropriate in different contexts. A longitudinal study of the U.S. cement industry from 1918-1986 demonstrates that simple CEO succession is positively associated with subsequent performance when context is stable, but significantly more negatively associated with subsequent performance in turbulent contexts. Executive team change has significant effects on organization adaptation in both stable and turbulent contexts. Strategic reorientations are negatively associated with subsequent performance in stable contexts, but significantly more positively associated with subsequent performance in turbulent contexts. As a set, these results reinforce a demographic approach to succession research and indicate that CEO succession, executive team change, and reorientation are each distinct and important levers shaping organization adaptation. The impacts of these levers are contingent on organization context.