
Showing papers in "Management Science in 1999"


Journal ArticleDOI
TL;DR: In this article, the authors investigated the relationship between the mobility of major patent holders and the localization of technological knowledge through the analysis of patent citations of important semiconductor innovations and found that knowledge localization is specific to only certain regions (particularly Silicon Valley) and that the degree of localization varies across regions.
Abstract: Knowledge, once generated, spills only imperfectly among firms and nations. We posit that since institutions and labor networks vary by region, there should be regional variations in the localization of spillovers. We investigate the relationship between the mobility of major patent holders and the localization of technological knowledge through the analysis of patent citations of important semiconductor innovations. We find that knowledge localization is specific to only certain regions (particularly Silicon Valley) and that the degree of localization varies across regions. By analyzing data on the interfirm mobility of patent holders, we empirically show that the interfirm mobility of engineers influences the local transfer of knowledge. The flow of knowledge is embedded in regional labor networks.

2,419 citations


Journal ArticleDOI
TL;DR: In this article, the authors examined the relationship between IT investments and Tobin's q, a financial market-based measure of firm performance, and found that IT investments had a significantly positive association with Tobin's q values.
Abstract: Despite increasing anecdotal evidence that information technology (IT) assets contribute to firm performance and future growth potential of firms, the empirical results relating IT investments to firm performance measures have been equivocal. However, the bulk of the studies have relied exclusively on accounting-based measures of firm performance, which largely tend to ignore IT's contribution to performance dimensions such as strategic flexibility and intangible value. In this paper, we use Tobin's q, a financial market-based measure of firm performance, and examine the association between IT investments and firm q values, after controlling for a variety of industry factors and firm-specific variables. The results based on data from 1988-1993 indicate that, in all of the five years, the inclusion of the IT expenditure variable in the model increased the variance explained in q significantly. The results also showed that, for all five years, IT investments had a significantly positive association with Tobin's q values. Our results are consistent with the notion that IT contributes to a firm's future performance potential, which a forward-looking measure such as the q is better able to capture.

1,176 citations


Journal ArticleDOI
TL;DR: In this paper, the authors incorporate information flow between a supplier and a retailer in a two-echelon model that captures the capacitated setting of a typical supply chain, estimate the savings at the supplier due to information flow, and study when information is most beneficial.
Abstract: We incorporate information flow between a supplier and a retailer in a two-echelon model that captures the capacitated setting of a typical supply chain. We consider three situations: (1) a traditional model where there is no information to the supplier prior to a demand to him except for past data; (2) the supplier knows the (s, S) policy used by the retailer as well as the end-item demand distribution; and (3) the supplier has full information about the state of the retailer. Order up-to policies continue to be optimal for models with information flow for the finite horizon, the infinite horizon discounted, and the infinite horizon average cost cases. Study of these three models enables us to understand the relationships between capacity, inventory, and information at the supplier level, as well as how they are affected by the retailer's (S - s) values and end-item demand distribution. We estimate the savings at the supplier due to information flow and study when information is most beneficial.
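
As a rough illustration of the setting (not the paper's model), the Python sketch below has a retailer follow an assumed (s, S) policy against random end-item demand; without information sharing, the lumpy order stream it produces is all the supplier ever observes. Lead times and supplier capacity are ignored for brevity, and all parameter values are made up.

```python
# Illustrative sketch: a retailer follows an (s, S) policy against random
# end-item demand; the resulting order stream is what the supplier sees when
# no further information is shared. Parameters are assumed, not the paper's.
import random

random.seed(1)
s, S = 20, 60            # reorder point and order-up-to level (assumed values)
inventory = S
orders = []

for period in range(52):
    demand = random.randint(0, 10)        # illustrative end-item demand
    inventory -= demand
    if inventory <= s:                    # (s, S) rule: raise position back to S
        orders.append(S - inventory)
        inventory = S
    else:
        orders.append(0)

print("nonzero orders seen by supplier:", [q for q in orders if q > 0])
```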

932 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider a supply chain consisting of two independent agents, a supplier e.g., a manufacturer and a retailer, the latter serving an uncertain market demand.
Abstract: Consider a supply chain consisting of two independent agents, a supplier e.g., a manufacturer and its customer e.g., a retailer, the latter in turn serving an uncertain market demand. To reconcile manufacturing/procurement time lags with a need for timely response to the market, such supply chains often must commit resources to production quantities based on forecasted rather than realized demand. The customer typically provides a planning forecast of its intended purchase, which does not entail commitment. Benefiting from overproduction while not bearing the immediate costs, the customer has incentive to initially overforecast before eventually purchasing a lesser quantity. The supplier must in turn anticipate such behavior in its production quantity decision. This individually rational behavior results in an inefficient supply chain. This paper models the incentives of the two parties, identifying causes of inefficiency and suggesting remedies. Particular attention is given to the Quantity Flexibility QF contract, which couples the customer's commitment to purchase no less than a certain percentage below the forecast with the supplier's guarantee to deliver up to a certain percentage above. Under certain conditions, this method can allocate the costs of market demand uncertainty so as to lead the individually motivated supplier and customer to the systemwide optimal outcome. We characterize the implications of QF contracts for the behavior and performance of both parties, and the supply chain as a whole.

902 citations


Journal ArticleDOI
TL;DR: In this article, the authors study the optimal bundling strategies for a multiproduct monopolist, and find that bundling very large numbers of unrelated information goods can be surprisingly profitable.
Abstract: We study the strategy of bundling a large number of information goods, such as those increasingly available on the Internet, and selling them for a fixed price. We analyze the optimal bundling strategies for a multiproduct monopolist, and we find that bundling very large numbers of unrelated information goods can be surprisingly profitable. The reason is that the law of large numbers makes it much easier to predict consumers' valuations for a bundle of goods than their valuations for the individual goods when sold separately. As a result, this "predictive value of bundling" makes it possible to achieve greater sales, greater economic efficiency, and greater profits per good from a bundle of information goods than can be attained when the same goods are sold separately. Our main results do not extend to most physical goods, as the marginal costs of production for goods not used by the buyer typically negate any benefits from the predictive value of large-scale bundling. While determining optimal bundling strategies for more than two goods is a notoriously difficult problem, we use statistical techniques to provide strong asymptotic results and bounds on profits for bundles of any arbitrary size. We show how our model can be used to analyze the bundling of complements and substitutes, bundling in the presence of budget constraints, and bundling of goods with various types of correlations and how each of these conditions can lead to limits on optimal bundle size. In particular we find that when different market segments of consumers differ systematically in their valuations for goods, simple bundling will no longer be optimal. However, by offering a menu of different bundles aimed at each market segment, bundling makes traditional price discrimination strategies more powerful by reducing the role of unpredictable idiosyncratic components of valuations. The predictions of our analysis appear to be consistent with empirical observations of the markets for Internet and online content, cable television programming, and copyrighted music.
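
The law-of-large-numbers argument can be seen in a few lines of simulation. The sketch below assumes i.i.d. Uniform(0, 1) valuations, zero marginal cost, and an arbitrary bundle price of 0.45 per good, none of which come from the paper: as the bundle grows, per-good valuations concentrate around the mean, so a single price captures almost every consumer.

```python
# Sketch of the "predictive value of bundling": per-good valuation of a bundle
# concentrates around the mean as the bundle grows, so one bundle price can
# extract most surplus. Uniform(0, 1) valuations, zero marginal cost, and the
# 0.45-per-good price are illustrative assumptions.
import random
import statistics

random.seed(0)
consumers = 10_000

for n_goods in (1, 10, 100):
    per_good_value = [
        statistics.mean(random.random() for _ in range(n_goods))
        for _ in range(consumers)
    ]
    price = 0.45 * n_goods                # bundle priced slightly below mean value
    buyers = sum(1 for v in per_good_value if v * n_goods >= price)
    print(f"bundle of {n_goods:3d} goods: std of per-good value = "
          f"{statistics.stdev(per_good_value):.3f}, buyers = {buyers}/{consumers}")
```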

887 citations


Journal ArticleDOI
TL;DR: In this article, the authors examined the performance of the top 55 U.S. commercial banks via a two-stage production process that separates profitability and marketability and uncovered substantial performance inefficiency in both dimensions.
Abstract: Utilizing recent developments in data envelopment analysis (DEA), this paper examines the performance of the top 55 U.S. commercial banks via a two-stage production process that separates profitability and marketability. Substantial performance inefficiency is uncovered in both dimensions. Relatively large banks exhibit better performance on profitability, whereas smaller banks tend to perform better with respect to marketability. New context-dependent performance measures are defined for profitability and marketability which employ a DEA stratification model and a DEA attractiveness measure. When combined with the original DEA measure, the context-dependent performance measure better characterizes the profitability and marketability of the 55 U.S. commercial banks. The new approach identifies areas for improved bank performance over the two-stage production process. The effect of acquisition on efficiency and attractiveness is also examined.
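
For readers unfamiliar with DEA, the sketch below solves the standard input-oriented CCR envelopment linear program with scipy on made-up data; it is a generic DEA building block, not the paper's two-stage or context-dependent models.

```python
# Minimal input-oriented CCR DEA efficiency score via linear programming.
# This is a standard DEA building block (not the paper's two-stage or
# context-dependent models). Data are made up: rows are DMUs (banks).
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0]])   # inputs of 3 DMUs
Y = np.array([[1.0], [2.0], [1.5]])                   # outputs of 3 DMUs

def ccr_efficiency(k):
    n, m = X.shape                                    # n DMUs, m inputs
    r = Y.shape[1]                                    # r outputs
    c = np.zeros(n + 1); c[0] = 1.0                   # minimize theta
    # inputs:  sum_j lambda_j * x_ij - theta * x_ik <= 0
    A_in = np.hstack([-X[[k]].T, X.T])
    b_in = np.zeros(m)
    # outputs: -sum_j lambda_j * y_rj <= -y_rk
    A_out = np.hstack([np.zeros((r, 1)), -Y.T])
    b_out = -Y[k]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for k in range(3):
    print(f"DMU {k}: efficiency = {ccr_efficiency(k):.3f}")
```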

878 citations


Journal ArticleDOI
TL;DR: In this paper, the authors interviewed over one-hundred individuals about all the pros and cons of using Internet commerce that they experienced or envisioned, and the results were organized into twenty-five categories of objectives that were influenced by Internet purchases.
Abstract: Internet commerce has the potential to offer customers a better deal compared to purchases by conventional methods in many situations. To make this potential a reality, businesses must focus on the values of their customers. We interviewed over one hundred individuals about all the pros and cons of using Internet commerce that they experienced or envisioned. The results were organized into twenty-five categories of objectives that were influenced by Internet purchases. These categories were separated into means objectives and fundamental objectives used to describe the bottom-line consequences of concern to customers. These results are applicable to designing an Internet commerce system for a business, creating and redesigning products, and increasing value to customers. The set of fundamental objectives also provides the foundation for developing a quantitative model of customer values.

827 citations


Journal ArticleDOI
TL;DR: A novel theoretical and empirical approach to analyzing processes at various levels of abstraction that allows people to explicitly represent the similarities among related processes and to easily find or generate sensible alternatives for how a given process could be performed.
Abstract: This paper describes a novel theoretical and empirical approach to tasks such as business process redesign and knowledge management. The project involves collecting examples of how different organizations perform similar processes, and organizing these examples in an on-line "process handbook." The handbook is intended to help people: (1) redesign existing organizational processes, (2) invent new organizational processes (especially ones that take advantage of information technology), and (3) share ideas about organizational practices. A key element of the work is an approach to analyzing processes at various levels of abstraction, thus capturing both the details of specific processes as well as the "deep structure" of their similarities. This approach uses ideas from computer science about inheritance and from coordination theory about managing dependencies. A primary advantage of the approach is that it allows people to explicitly represent the similarities (and differences) among related processes and to easily find or generate sensible alternatives for how a given process could be performed. In addition to describing this new approach, the work reported here demonstrates the basic technical feasibility of these ideas and gives one example of their use in a field study.

709 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider how a firm's resource base affects the choice of industries into which the firm diversifies and show that the predictive power of the "resource-based view of the firm" is greatly improved when resources are measured at a finer level.
Abstract: This study considers how a firm's resource base affects the choice of industries into which the firm diversifies. It offers two main extensions of prior research. First, it operationalizes technological resources at a more detailed level than in prior studies, thereby enabling a more stringent analysis of the direction of diversification. This analysis shows that the predictive power of the "resource-based view of the firm" is greatly improved when resources are measured at a finer level. Second, the study integrates principles from transaction cost economics into resource-based predictions concerning diversification. In particular, it tests the common assumption that rent-generating resources are too asset specific to allow contracting. The findings point to circumstances where resources can be and are exploited through contracting rather than through diversification.

612 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider a two-stage serial supply chain with stationary stochastic demand and fixed transportation times, and compare the policies chosen under this competitive regime to those selected to minimize total supply chain costs, i.e., the optimal solution.
Abstract: We investigate a two-stage serial supply chain with stationary stochastic demand and fixed transportation times. Inventory holding costs are charged at each stage, and each stage may incur a consumer backorder penalty cost; e.g., the upper stage (the supplier) may dislike backorders at the lower stage (the retailer). We consider two games. In both, the stages independently choose base stock policies to minimize their costs. The games differ in how the firms track their inventory levels (in one, the firms are committed to tracking echelon inventory; in the other they track local inventory). We compare the policies chosen under this competitive regime to those selected to minimize total supply chain costs, i.e., the optimal solution. We show that the games (nearly always) have a unique Nash equilibrium, and it differs from the optimal solution. Hence, competition reduces efficiency. Furthermore, the two games' equilibria are different, so the tracking method influences strategic behavior. We show that the system optimal solution can be achieved as a Nash equilibrium using simple linear transfer payments. The value of cooperation is context specific: In some settings competition increases total cost by only a fraction of a percent, whereas in other settings the cost increase is enormous. We also discuss Stackelberg equilibria.

541 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show that not accounting for endogeneity may result in a substantial bias in the parameter estimates of random utility models with scanner panel data, and they test whether these endogeneity problems are important enough to warrant consideration when estimating random utility models with scanner data.
Abstract: Applications of random utility models to scanner data have been widely presented in marketing for the last 20 years. One particular problem with these applications is that they have ignored possible correlations between the independent variables in the deterministic component of utility (price, promotion, etc.) and the stochastic component or error term. In fact, marketing-mix variables, such as price, not only affect brand purchasing probabilities but are themselves endogenously set by marketing managers. This work tests whether these endogeneity problems are important enough to warrant consideration when estimating random utility models with scanner panel data. Our results show that not accounting for endogeneity may result in a substantial bias in the parameter estimates.
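
A minimal simulation makes the point concrete. The sketch below uses a linear demand model rather than a random utility model, with made-up coefficients: because price responds to a demand shock the analyst does not observe, OLS is biased toward zero, while a two-stage regression on an exogenous cost shifter recovers the true effect.

```python
# Sketch of the endogeneity problem (linear demand for simplicity, not the
# paper's random utility model): price is set partly in response to a demand
# shock the analyst never sees, so OLS is biased; an exogenous cost shifter
# used as an instrument recovers the true price effect. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
shock = rng.normal(size=n)              # demand shock seen by the manager, not the analyst
cost = rng.normal(size=n)               # exogenous cost shifter (instrument)
price = 1.0 + 0.5 * cost + 0.8 * shock + rng.normal(scale=0.1, size=n)
sales = 10.0 - 2.0 * price + 2.0 * shock + rng.normal(size=n)   # true price effect = -2

X = np.column_stack([np.ones(n), price])
ols = np.linalg.lstsq(X, sales, rcond=None)[0]

# two-stage least squares by hand: project price on the instrument first
Z = np.column_stack([np.ones(n), cost])
price_hat = Z @ np.linalg.lstsq(Z, price, rcond=None)[0]
iv = np.linalg.lstsq(np.column_stack([np.ones(n), price_hat]), sales, rcond=None)[0]

print(f"true price effect -2.0, OLS estimate {ols[1]:+.2f}, IV estimate {iv[1]:+.2f}")
```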

Journal ArticleDOI
TL;DR: In this article, the authors consider a supply chain in which a product must pass through multiple sites located in series before it is finally delivered to outside customers, and they show that a performance measurement scheme involving transfer pricing, consignment, shortage reimbursement, and an additional backlog penalty at the last downstream site satisfies all these properties.
Abstract: Consider a supply chain in which a product must pass through multiple sites located in series before it is finally delivered to outside customers. Incentive problems may arise in this system when decisions are delegated to corresponding site managers, each maximizing his/her own performance metric. From the overall system's point of view, the decentralized supply chain may not be as efficient as the centralized one. In practice, alternative performance mechanisms are often used to align the incentives of the different managers in a supply chain. This paper discusses the cost conservation, incentive compatibility, and informational decentralizability properties of these mechanisms. In particular, for a special type of supply chain, we show that a performance measurement scheme involving transfer pricing, consignment, shortage reimbursement, and an additional backlog penalty at the last downstream site satisfies all these properties.

Journal ArticleDOI
TL;DR: This paper studies how decision makers choose when faced with multiple plays of a gamble or investment, and finds that subjects show a sensitivity to the amount to lose on a single trial, holding the distribution of returns for the portfolio constant; that is, they display myopic loss aversion.
Abstract: We study how decision makers choose when faced with multiple plays of a gamble or investment. When evaluating multiple plays of a simple mixed gamble, a chance to win x or lose y, subjects show a sensitivity to the amount to lose on a single trial, holding the distribution of returns for the portfolio constant; that is, they display "myopic loss aversion." Many subjects who decline multiple plays of such a gamble will accept it when shown the resulting distribution. This analysis is applied to the problem of retirement investing. We show that workers invest more of their retirement savings in stocks if they are shown long-term (rather than one-year) rates of return.
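
The aggregation effect the paper exploits is easy to simulate. The sketch below uses a hypothetical win-$200 / lose-$100 even-odds gamble (not a payoff from the paper) and estimates how quickly the probability of an overall loss falls as independent plays are aggregated.

```python
# Sketch of the aggregation effect behind "myopic loss aversion": a single
# win-$200 / lose-$100 coin flip looks risky, but the chance of an overall
# loss shrinks rapidly as plays are aggregated. Payoffs are made up.
import random

random.seed(42)
trials = 100_000

for plays in (1, 10, 50):
    losses = 0
    for _ in range(trials):
        total = sum(200 if random.random() < 0.5 else -100 for _ in range(plays))
        losses += total < 0
    print(f"{plays:2d} plays: P(overall loss) ~ {losses / trials:.3f}")
```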

Journal ArticleDOI
TL;DR: In this paper, a retailer must construct an assortment for a category of product variants distinguished by some attribute such as color or flavor, i.e., select a subset of variants to stock and determine purchase quantities for each offered variant.
Abstract: Consider a category of product variants distinguished by some attribute such as color or flavor. A retailer must construct an assortment for the category, i.e., select a subset of variants to stock and determine purchase quantities for each offered variant. We analyze this problem using a multinomial logit model to describe the consumer choice process and a newsboy model to represent the retailer's inventory cost. We show that the optimal assortment has a simple structure and provide insights on how various factors affect the optimal level of assortment variety. We also develop a formal definition of the level of fashion in a category using the theory of majorization and examine its implications for category profits.
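
A minimal sketch of the two ingredients named in the abstract appears below: multinomial-logit choice shares for a candidate assortment and a newsvendor stocking quantity for each stocked variant, with a brute-force search over subsets. The utilities, prices, costs, and the normal demand approximation are all illustrative assumptions, not the paper's specification.

```python
# Sketch: MNL choice shares for an assortment plus a newsvendor stocking rule
# per stocked variant, with brute-force subset search. All numbers are assumed.
import math
from itertools import combinations
from statistics import NormalDist

v = {"red": 1.0, "blue": 0.8, "green": 0.3}   # assumed MNL utilities
v_outside = 0.5                                # no-purchase option
market_size = 1000                             # expected store traffic
price, cost = 10.0, 6.0                        # identical across variants (assumption)

def expected_profit(assortment):
    denom = math.exp(v_outside) + sum(math.exp(v[i]) for i in assortment)
    profit = 0.0
    for i in assortment:
        mean = market_size * math.exp(v[i]) / denom          # MNL demand share
        sd = math.sqrt(mean)                                  # rough demand spread
        q = mean + sd * NormalDist().inv_cdf((price - cost) / price)   # critical fractile
        z = (q - mean) / sd
        # expected sales E[min(D, q)] under a normal demand approximation
        expected_sales = mean - sd * (NormalDist().pdf(z) - z * (1 - NormalDist().cdf(z)))
        profit += price * expected_sales - cost * q
    return profit

best = max((a for r in range(1, 4) for a in combinations(v, r)), key=expected_profit)
print("best assortment:", best, "expected profit:", round(expected_profit(best), 1))
```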

Journal ArticleDOI
TL;DR: In this article, the authors consider a simple supply chain in which a single supplier sells to several downstream retailers, and the supplier allocates capacity using a publicly known allocation mechanism, a mapping from retailer orders to capacity assignments.
Abstract: We consider a simple supply chain in which a single supplier sells to several downstream retailers. The supplier has limited capacity, and retailers are privately informed of their optimal stocking levels. If retailer orders exceed available capacity, the supplier allocates capacity using a publicly known allocation mechanism, a mapping from retailer orders to capacity assignments. We show that a broad class of mechanisms are prone to manipulation: Retailers will order more than they need to gain a more favorable allocation. Another class of mechanisms induces the retailers to order exactly their needs, thereby revealing their private information. However, there does not exist a truth-inducing mechanism that maximizes total retailer profits. We also consider the supplier's capacity choice. We show that a manipulable mechanism may lead the supplier to choose a higher level of capacity than she would under a truth-inducing mechanism. Nevertheless, her choice will appear excessively restrictive relative to the prevailing distribution of orders. Furthermore, switching to a truth-inducing mechanism can lower profits for the supplier, the supply chain, and even her retailers. Hence, truth-telling is not a universally desirable goal.
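
A two-retailer arithmetic example illustrates why a mechanism such as proportional allocation, one member of the manipulable class studied here, rewards inflated orders; the numbers are made up.

```python
# Sketch: under proportional allocation (an example of a manipulable
# mechanism), a retailer who inflates its order captures a larger share of
# scarce capacity. Numbers are illustrative.
capacity = 100.0
need_a, need_b = 60.0, 60.0          # each retailer truly needs 60 units

def allocate(order_a, order_b):
    total = order_a + order_b
    scale = min(1.0, capacity / total)
    return order_a * scale, order_b * scale

print(allocate(need_a, need_b))      # both truthful: (50.0, 50.0)
print(allocate(90.0, need_b))        # A inflates, B truthful: (60.0, 40.0)
```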

Journal ArticleDOI
TL;DR: In this paper, a unified approach, referred to as the AR-IDEA model, is achieved that includes not only imprecise data capabilities but also assurance region and cone-ratio envelopment concepts.
Abstract: Data Envelopment Analysis (DEA) is a nonparametric approach to evaluating the relative efficiency of decision making units (DMUs) that use multiple inputs to produce multiple outputs. An assumption underlying DEA is that all the data assume the form of specific numerical values. In some applications, however, the data may be imprecise. For instance, some of the data may be known only within specified bounds, while other data may be known only in terms of ordinal relations. DEA with imprecise data or, more compactly, the Imprecise Data Envelopment Analysis (IDEA) method developed in this paper permits mixtures of imprecisely- and exactly-known data, which the IDEA models transform into ordinary linear programming forms. This is carried even further in the present paper to comprehend the now extensively employed Assurance Region (AR) concepts in which bounds are placed on the variables rather than the data. We refer to this approach as AR-IDEA, because it replaces conditions on the variables with transformations of the data and thus also aligns the developments we describe in this paper with what are known as cone-ratio envelopments in DEA. As a result, one unified approach, referred to as the AR-IDEA model, is achieved which includes not only imprecise data capabilities but also assurance region and cone-ratio envelopment concepts.

Journal ArticleDOI
TL;DR: This paper investigates whether VCs' assessment policies of new venture survival are consistent with those arising from the strategy literature (using two established strategy perspectives) and finds that VCs' assessment policies are predominantly consistent with those proposed by strategy scholars, providing insight into why VCs consider certain criteria in their assessment and why some criteria are more important than others.
Abstract: This study investigates whether VCs' assessment policies of new venture survival are consistent with those arising from the strategy literature (using two established strategy perspectives). Strategy scholars suggest the nature of the markets, competition, and decisions made by the management team affect a new venture's survival chances. The findings demonstrate that VCs' assessment policies are predominantly consistent with those proposed by strategy scholars, providing insight into why VCs consider certain criteria in their assessment of new venture survival as well as why some criteria are more important in their assessment than others. Through this increased understanding of venture capitalists' decision making, entrepreneurs seeking capital may be better able to address their requests for funding to those criteria venture capitalists find most critical to the survival of a new venture. Venture capitalists may use these findings to better understand their own decision making process, which, in turn, provides the opportunity to increase evaluation efficiency.

Journal ArticleDOI
TL;DR: In this paper, the authors used data from 41 steel production lines to assess the effects of Japanese and U.S. human resource management (HRM) practices on worker productivity.
Abstract: This study uses personally collected data from 41 steel production lines to assess the effects of Japanese and U.S. human resource management (HRM) practices on worker productivity. The Japanese production lines employ a common system of HRM practices including: problem-solving teams, extensive orientation, training throughout employees' careers, extensive information sharing, rotation across jobs, employment security, and profit sharing. A majority of U.S. plants now have one or two features of this system of HRM practices, but only a minority have a comprehensive system of innovative work practices that parallels the full system of practices found among the Japanese manufacturers. We find that the Japanese lines are significantly more productive than the U.S. lines. However, U.S. manufacturers that have adopted a full system of innovative HRM practices patterned after the Japanese system achieve levels of productivity and quality equal to the performance of the Japanese manufacturers. This study's evidence helps reconcile conflicting views about the effectiveness of adopting Japanese-style worker involvement schemes in the United States. United States manufacturers that have adopted a definition of employee participation that extends only to problem-solving teams or information sharing do not see large improvements in productivity. However, U.S. manufacturers that adopt a broader definition of participation that mimics the full Japanese HRM system see substantial performance gains.

Journal ArticleDOI
TL;DR: In this article, the authors investigated JIT implementation differences between small and large U.S. manufacturers and found that the frequencies of the 10 JIT management practices implemented differ between the two groups of manufacturer size and an association exists between the JIT practices implemented and manufacturer size.
Abstract: Since the early 1980s, the diffusion of Just-In-Time (JIT) manufacturing from Japanese manufacturers to U.S. manufacturers has progressed at an accelerated rate. At this stage of the diffusion process, JIT implementations are more common and more advanced in large U.S. manufacturers than in small; consequently, U.S. businesses' understanding of issues associated with JIT implementations in large manufacturers is more developed than that for small manufacturers. Because small manufacturers represent about 96 percent of all U.S. manufacturers, investigation of JIT implementations in small, as well as large, manufacturers is warranted. This survey study investigates JIT implementation differences between small and large U.S. manufacturers. Ten management practices that constitute the JIT concept are used to examine implementation of JIT manufacturing systems. Odds ratios were constructed to determine if an association exists between implemented versus not implemented and manufacturer size for each JIT practice. Ten changes in performance attributed to JIT implementation are also assessed and examined in the study. Logistic regression models are used to examine the relationships between implementation status of each of the JIT practices and of each of the changes in performance in small and large manufacturers. The results of the study show that the frequencies of the 10 JIT management practices implemented differ between the two groups of manufacturer size, and an association exists between the JIT practices implemented and manufacturer size. Moreover, the changes in performance attributed to JIT implementation vary, depending on implementation status of specific JIT management practices and manufacturer size.

Journal ArticleDOI
TL;DR: The simulations show that bundling options can reduce the amount of buffer capacity required, and that random variation is more pernicious to productivity than product variety, supporting the efforts of some auto makers to aggressively attack the causes of random variation.
Abstract: This study examines the impact of product variety on automobile assembly plant performance using data from GM's Wilmington, Delaware plant, together with simulation analyses of a more general auto assembly line. We extend prior product variety studies by providing evidence on the magnitude of variety-related production losses, the mechanisms through which variety impacts performance, and the effects of option bundling and labor staffing policies on the costs of product variety. The empirical analyses indicate that greater day-to-day variability in option content (but not mean option content per car) has a significant adverse impact on total labor hours per car produced, overhead hours per car produced, assembly line downtime, minor repair and major rework, and inventory levels, but does not have a significant short-run impact on total direct labor hours. However, workstations with higher variability in option content have greater slack direct labor resources to buffer against process time variation, introducing an additional cost of product variety. The simulation results support these findings in that once each workstation is optimally buffered against process time variation, product variety has an insignificant impact on direct assembly labor. The simulations also show that bundling options can reduce the amount of buffer capacity required, and that random variation is more pernicious to productivity than product variety, supporting the efforts of some auto makers to aggressively attack the causes of random variation.

Journal ArticleDOI
TL;DR: An analytic model of component sharing is developed and it is shown that the optimal number of brake rotors is a function of the range of vehicle weights, sales volume, fixed component design and tooling costs, variable costs, and the variation in production volume across the models of the product line.
Abstract: Product variety in many industries has increased steadily throughout this century. Component sharing, that is, using the same version of a component across multiple products, is increasingly viewed by companies as a way to offer high variety in the marketplace while retaining low variety in their operations. Yet, despite the popularity of component sharing in industry, little is known about how to design an effective component-sharing strategy or about the factors that influence the success of such a strategy. In this paper we critically examine component sharing using automotive front brakes as an example. We consider three basic questions: (1) What are the key drivers and trade-offs of component-sharing decisions? (2) How much variation exists in actual component-sharing practice? and (3) How can this variation be explained? To answer these questions, we develop an analytic model of component sharing and show through empirical testing that this model explains much of the variation in sharing practice for automotive braking systems. We find that the optimal number of brake rotors is a function of the range of vehicle weights, sales volume, fixed component design and tooling costs, variable costs, and the variation in production volume across the models of the product line. We conclude with a discussion of the general managerial implications of our findings.

Journal ArticleDOI
TL;DR: This paper presents a combination of two new algorithms that recently proved to outperform all previous methods for the exact solution of the 0-1 Knapsack Problem; in addition, valid inequalities are generated and surrogate relaxed, and a new initial core problem is adopted.
Abstract: Two new algorithms recently proved to outperform all previous methods for the exact solution of the 0-1 Knapsack Problem. This paper presents a combination of such approaches, where, in addition, valid inequalities are generated and surrogate relaxed, and a new initial core problem is adopted. The algorithm is able to solve all classical test instances, with up to 10,000 variables, in less than 0.2 seconds on a HP9000-735/99 computer. The C language implementation of the algorithm is available on the internet.
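
For context, the sketch below is a textbook dynamic-programming solver for the same 0-1 Knapsack Problem; it is not the paper's core/branch-and-bound algorithm and is far slower on the 10,000-variable instances mentioned above, but it states the problem precisely.

```python
# Baseline 0-1 knapsack solver by dynamic programming over capacity
# (a textbook method, not the paper's algorithm).
def knapsack(values, weights, capacity):
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # iterate capacity downwards so each item is used at most once
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
```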

Journal ArticleDOI
TL;DR: In this paper, a comparative analysis of possible postponement strategies in a two-stage decision model where firms make three decisions: capacity investment, production (inventory) quantity, and price is presented.
Abstract: This article presents a comparative analysis of possible postponement strategies in a two-stage decision model where firms make three decisions: capacity investment, production (inventory) quantity, and price. Typically, investments are made while the demand curve is uncertain. The strategies differ in the timing of the operational decisions relative to the realization of uncertainty. We show how competition, uncertainty, and the timing of operational decisions influence the strategic investment decision of the firm and its value. In contrast to production postponement, price postponement makes the investment and production (inventory) decisions relatively insensitive to uncertainty. This suggests that managers can make optimal capacity decisions by deterministic reasoning if they have some price flexibility. Under price postponement, additional postponement of production has relatively small incremental value. Therefore, it may be worthwhile to consider flexible ex-post pricing before production postponement reengineering. While more postponement increases firm value, it is counterintuitive that this also makes the optimal capacity decision more sensitive to uncertainty. We highlight the different impact of more timely information, which leads to higher investment and inventories, and of reduced demand uncertainty, which decreases investment and inventories. Our analysis suggests appropriateness conditions for simple make-to-stock and make-to-order strategies. We also present technical sufficiency and uniqueness conditions. Under price postponement, these results extend to oligopolistic and perfect competition for which pure equilibria are derived. Interestingly, the relative value of operational postponement techniques seems to increase as the industry becomes more competitive.

Journal ArticleDOI
TL;DR: In this paper, the authors analyze and present outsourcing conditions for three contract types: (1) price-only contracts where an ex-ante transfer price is set for each unit supplied by the subcontractor, (2) incomplete contracts, where both parties negotiate over the subcontracting transfer, and (3) state-dependent contracts for which they show an equivalence result.
Abstract: We value the option of subcontracting to improve financial performance and system coordination by analyzing a competitive stochastic investment game with recourse. The manufacturer and subcontractor decide separately on their capacity investment levels. Then demand uncertainty is resolved and both parties have the option to subcontract when deciding on their production and sales. We analyze and present outsourcing conditions for three contract types: (1) price-only contracts where an ex-ante transfer price is set for each unit supplied by the subcontractor; (2) incomplete contracts, where both parties negotiate over the subcontracting transfer; and (3) state-dependent price-only and incomplete contracts for which we show an equivalence result. While subcontracting with these three contract types can coordinate production decisions in the supply system, only state-dependent contracts can eliminate all decentralization costs and coordinate capacity investment decisions. The minimally sufficient price-only contract that coordinates our supply chain specifies transfer prices for a small number (6 in our model) of contingent scenarios. Our game-theoretic model allows the analysis of the role of transfer prices and of the bargaining power of buyer and supplier. We find that sometimes firms may be better off leaving some contract parameters unspecified ex-ante and agreeing to negotiate ex-post. Also, a price-focused strategy for managing subcontractors can backfire because a lower transfer price may decrease the manufacturer's profit. Finally, as with financial options, the option value of subcontracting increases as markets are more volatile or more negatively correlated.

Journal ArticleDOI
TL;DR: In this paper, the authors describe an alternative approach that uses a copula to construct joint distributions and pairwise correlations to incorporate dependence among the variables, which is designed specifically to permit the use of an expert's subjective judgments of marginal distributions and correlations.
Abstract: The construction of a probabilistic model is a key step in most decision and risk analyses. Typically this is done by defining a joint distribution in terms of marginal and conditional distributions for the model's random variables. We describe an alternative approach that uses a copula to construct joint distributions and pairwise correlations to incorporate dependence among the variables. The approach is designed specifically to permit the use of an expert's subjective judgments of marginal distributions and correlations. The copula that underlies the multivariate normal distribution provides the basis for modeling dependence, but arbitrary marginals are allowed. We discuss how correlations can be assessed using techniques that are familiar to decision analysts, and we report the results of an empirical study of the accuracy of the assessment methods. The approach is demonstrated in the context of a simple example, including a study of the sensitivity of the results to the assessed correlations.
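
The construction the abstract describes can be sketched in a few lines: correlated standard normals supply the dependence (the normal copula), and the expert's marginal distributions supply the shapes. The 0.6 correlation and the lognormal and exponential marginals below are illustrative assumptions, not values from the paper.

```python
# Sketch of the normal-copula construction: sample correlated standard
# normals, map them to uniforms through the normal CDF, then push the
# uniforms through arbitrary marginal inverse CDFs. Correlation and marginals
# here are assumptions for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])

z = rng.multivariate_normal(mean=[0.0, 0.0], cov=corr, size=100_000)
u = stats.norm.cdf(z)                               # copula step: dependent uniforms
x1 = stats.lognorm.ppf(u[:, 0], s=0.5)              # expert's marginal for variable 1
x2 = stats.expon.ppf(u[:, 1], scale=2.0)            # expert's marginal for variable 2

rho, _ = stats.spearmanr(x1, x2)
print("Spearman rank correlation of the simulated pair:", round(rho, 3))
```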

Journal ArticleDOI
TL;DR: In this article, the authors developed a framework for combining strategic benchmarking with efficiency benchmarking of the services offered by bank branches, in which a cascade of efficiency benchmarks are developed guided by the service-profit chain.
Abstract: We develop a framework for combining strategic benchmarking with efficiency benchmarking of the services offered by bank branches. In particular, a cascade of efficiency benchmarking models is developed guided by the service-profit chain. Three models, based on the nonparametric technique of Data Envelopment Analysis, are developed in order to implement the framework in a practical setting: (i) an operational efficiency model, (ii) a service quality efficiency model, and (iii) a profitability efficiency model. The use of the models is illustrated using data from the branches of a commercial bank. Empirical results indicate that we gain superior insights by analyzing simultaneously the design of operations together with the quality of the provided services and profitability, rather than by benchmarking these three dimensions separately. Relationships are also established between operational efficiency and profitability, and between operational efficiency and service quality.

Journal ArticleDOI
Fangruo Chen
TL;DR: In this paper, the authors consider a supply chain whose members are divisions of the same firm and characterize the optimal decision rules for the divisions under the assumption that the division managers share a common goal to optimize the overall performance of the supply chain (i.e., they act as a team).
Abstract: We consider a supply chain whose members are divisions of the same firm. The divisions are managed by different individuals with only local inventory information. Both the material and information flows in the supply chain are subject to delays. Under the assumption that the division managers share a common goal to optimize the overall performance of the supply chain (i.e., they act as a team), we characterize the optimal decision rules for the divisions. The team solution reveals the role of information leadtimes in determining the optimal replenishment strategies. We then show that the owner of the firm can manage the divisions as cost centers without compromising the systemwide performance. This is achieved by using an incentive-compatible measurement scheme based on accounting inventory levels. Finally, we investigate the impact of irrational behavior on supply chain performance and demonstrate that it is important for the upstream members of the supply chain to have access to accurate customer demand information.

Journal ArticleDOI
TL;DR: This paper considers a relatively simple hybrid system, related to a single component durable product, and presents a methodology to analyse a PUSH control strategy (in which all returned products are remanufactured as early as possible) and a PULL control strategy (in which all returned products are remanufactured as late as is convenient).
Abstract: This paper is on production planning and inventory control in systems where manufacturing and remanufacturing operations occur simultaneously. Typical for these hybrid systems is that both the output of the manufacturing process and the output of the remanufacturing process can be used to fulfill customer demands. Here, we consider a relatively simple hybrid system, related to a single component durable product. For this system, we present a methodology to analyse a PUSH control strategy (in which all returned products are remanufactured as early as possible) and a PULL control strategy (in which all returned products are remanufactured as late as is convenient). The main contributions of this paper are (i) to compare traditional systems without remanufacturing to PUSH and to PULL controlled systems with remanufacturing, and (ii) to derive managerial insights into the inventory related effects of remanufacturing.

Journal ArticleDOI
TL;DR: In this article, the authors investigated two sources of nonlinearity of decision weights: subadditivity of probability judgments and the overweighting of small probabilities and underweighting of medium and large probabilities.
Abstract: In most real-world decisions, consequences are tied explicitly to the outcome of events. Previous studies of decision making under uncertainty have indicated that the psychological weight attached to an event, called a decision weight, usually differs from the probability of that event. We investigate two sources of nonlinearity of decision weights: subadditivity of probability judgments, and the overweighting of small probabilities and underweighting of medium and large probabilities. These two sources of nonlinearity are combined into a two-stage model of choice under uncertainty. In the first stage, events are taken into subjective probability judgments, and the second stage takes probability judgments into decision weights. We then characterize the curvature of the decision weights by extending a condition employed by Wu and Gonzalez (1996) in the domain of risk to the domain of uncertainty and show that the nonlinearity of decision weights can be decomposed into subadditivity of probability judgments and the curvature of the probability weighting function. Empirical tests support the proposed two-stage model and indicate that decision weights are concave then convex. More specifically, our results lend support for a new property of subjective probability judgments, interior additivity (subadditive at the boundaries, but additive away from the boundaries), and show that the probability weighting function is inverse S-shaped as in Wu and Gonzalez (1996).
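
The two-stage composition can be sketched as a judged probability P(E) passed through an inverse-S weighting function w(.). The one-parameter form below is the standard Tversky and Kahneman (1992) specification, used here only for illustration rather than the paper's estimated curves.

```python
# Sketch of the two-stage composition: a subjective probability judgment
# (stage 1) is passed through an inverse-S probability weighting function
# (stage 2). The one-parameter form and gamma = 0.61 are the standard
# Tversky-Kahneman (1992) illustration, not this paper's estimates.
def weight(p, gamma=0.61):
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

def decision_weight(judged_probability, gamma=0.61):
    # stage 1 output (a probability judgment) feeds the stage 2 weighting function
    return weight(judged_probability, gamma)

for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    print(f"judged P = {p:.2f} -> decision weight = {decision_weight(p):.3f}")
```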

Journal ArticleDOI
TL;DR: In this article, the authors developed a procedure and the requisite theory for incorporating preference information in a novel way in the efficiency analysis of decision making units, which is defined in the spirit of Data Envelopment Analysis (DEA), complemented with decision maker's preference information concerning the desirable structure of inputs and outputs.
Abstract: We develop a procedure and the requisite theory for incorporating preference information in a novel way in the efficiency analysis of Decision Making Units. The efficiency of Deci sion Making Units is defined in the spirit of Data Envelopment Analysis (DEA), complemented with Decision Maker's preference information concerning the desirable structure of inputs and outputs. Our procedure begins by aiding the Decision Maker in searching for the most preferred combination of inputs and outputs of Decision Making Units (for short, Most Preferred Solution) which are efficient in DEA. Then, assuming that the Decision Maker's Most Preferred Solution maximizes his/her underlying (unknown) value function, we approximate the indifference contour of the value function at this point with its possible tangent hyperplanes. Value Efficiency scores are then calculated for each Decision Making Unit comparing the inefficient units to units having the same value as the Most Preferred Solution. The resulting Value Efficiency scores are optimistic approximations of the true scores. The procedure and the resulting efficiency scores are immediately applicable to solving practical problems.