
Showing papers in "ERIM report series research in management Erasmus Research Institute of Management in 2009"


Posted Content
TL;DR: Both the theoretical and the empirical results indicate that co-occurrence data can best be normalized using a probabilistic measure, providing strong support for the use of the association strength in scientometric research.
Abstract: In scientometric research, the use of co-occurrence data is very common. In many cases, a similarity measure is employed to normalize the data. However, there is no consensus among researchers on which similarity measure is most appropriate for normalization purposes. In this paper, we theoretically analyze the properties of similarity measures for co-occurrence data, focusing in particular on four well-known measures: the association strength, the cosine, the inclusion index, and the Jaccard index. We also study the behavior of these measures empirically. Our analysis reveals that there exist two fundamentally different types of similarity measures, namely set-theoretic measures and probabilistic measures. The association strength is a probabilistic measure, while the cosine, the inclusion index, and the Jaccard index are set-theoretic measures. Both our theoretical and our empirical results indicate that co-occurrence data can best be normalized using a probabilistic measure. This provides strong support for the use of the association strength in scientometric research.
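For readers who want to experiment with the four measures, the sketch below implements them in Python following their standard definitions for co-occurrence data; the counts in the example are invented, not taken from the paper.

```python
"""Illustrative sketch: the four similarity measures discussed, using their
standard definitions for co-occurrence data.

c_ij      : number of co-occurrences of items i and j
s_i, s_j  : total number of occurrences of items i and j
"""

def association_strength(c_ij, s_i, s_j):
    # Probabilistic measure: proportional to the ratio of observed
    # co-occurrences to the number expected under statistical independence.
    return c_ij / (s_i * s_j)

def cosine(c_ij, s_i, s_j):
    return c_ij / (s_i * s_j) ** 0.5

def inclusion_index(c_ij, s_i, s_j):
    return c_ij / min(s_i, s_j)

def jaccard_index(c_ij, s_i, s_j):
    return c_ij / (s_i + s_j - c_ij)

# Hypothetical counts: keyword i occurs 100 times, keyword j 400 times,
# and they co-occur 40 times.
for f in (association_strength, cosine, inclusion_index, jaccard_index):
    print(f"{f.__name__:>20}: {f(40, 100, 400):.4f}")
```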

291 citations


Posted Content
TL;DR: The aim in this paper is to provide an overview of the functionality of VOSviewer and to elaborate on the technical implementation of specific parts of the program.
Abstract: We present VOSviewer, a computer program that we have developed for constructing and viewing bibliometric maps. VOSviewer combines the VOS mapping technique and an advanced viewer into a single easy-to-use computer program that is freely available to the bibliometric research community. Our aim in this paper is to provide an overview of the functionality of VOSviewer and to elaborate on the technical implementation of specific parts of the program.

217 citations


Posted Content
TL;DR: In this article, the authors developed a model using the theory of branching processes to predict how many customers a viral marketing campaign will reach, and how marketers can influence this process through marketing activities.
Abstract: In a viral marketing campaign an organization develops a marketing message, and stimulates customers to forward this message to their contacts. Despite its increasing popularity, there are no models yet that help marketers to predict how many customers a viral marketing campaign will reach, and how marketers can influence this process through marketing activities. This paper develops such a model using the theory of branching processes. The proposed Viral Branching Model allows customers to participate in a viral marketing campaign by 1) opening a seeding email from the organization, 2) opening a viral email from a friend, and 3) responding to other marketing activities such as banners and offline advertising. The model parameters are estimated using individual-level data that become available in large quantities already in the early stages of viral marketing campaigns. The Viral Branching Model is applied to an actual viral marketing campaign in which over 200,000 customers participated during a six-week period. The results show that the model quickly predicts the actual reach of the campaign. In addition, the model proves to be a valuable tool to evaluate alternative what-if scenarios.
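To illustrate the branching-process intuition behind such campaigns (this is not the authors' estimated model), the sketch below simulates a campaign in which seeding emails are opened with some probability and each participant forwards the message to a Poisson number of friends; all parameter values are assumptions.

```python
"""Monte Carlo sketch of the branching-process idea behind a viral campaign.
All parameters are illustrative assumptions, not estimates from the paper."""
import numpy as np

rng = np.random.default_rng(42)

def simulate_reach(n_seeds=1000, p_open_seed=0.20, p_open_viral=0.35,
                   mean_forwards=3.0, max_generations=10):
    """Total number of participants (openers) in one simulated campaign."""
    participants = 0
    current = rng.binomial(n_seeds, p_open_seed)           # generation 0: seeding emails
    for _ in range(max_generations):
        if current == 0:
            break
        participants += current
        invitations = rng.poisson(mean_forwards, size=current).sum()
        current = rng.binomial(invitations, p_open_viral)  # next generation of openers
    return participants

runs = [simulate_reach() for _ in range(200)]
print("mean simulated reach:", sum(runs) / len(runs))
```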

184 citations


Posted Content
TL;DR: In this paper, a dyadic model is proposed to explain why LMX disagreement can stem from differences in both parties' ILTs and IFTs, as well as differences in perceptions of own and other's behavior.
Abstract: While Leader-Member Exchange (LMX) research shows that leaders engage in different kinds of relationships with different followers, it remains somewhat of an enigma why one and the same relationship is often rated differently by a leader and the respective follower. We seek to fill that conceptual void by explaining when and why such LMX disagreement is likely to occur. To do so, we reconsider antecedents of LMX quality perceptions and outline how each party’s LMX quality perception is primarily dependent on the perceived contributions of the other party, moderated by perceived own contributions. We then integrate the notion of Implicit Leadership and Followership Theories (ILTs and IFTs) to argue that the currencies of contributions differ between leaders and followers. This dyadic model sets the stage to explain that LMX disagreement can stem from (1) differences in both parties’ ILTs as well as both parties’ IFTs, but also from (2) differences in perceptions of own and other’s behavior. We conclude by discussing communication as a means of overcoming LMX disagreement and propose an array of potential studies along the lines of our conceptualization.

78 citations


Posted Content
TL;DR: In this article, the authors developed a simple test to assess whether horizontal spillover effects from multinational to domestic firms are endogenous to the market structure generated by the incremental entry of the same multinationals.
Abstract: We develop a simple test to assess whether horizontal spillover effects from multinational to domestic firms are endogenous to the market structure generated by the incremental entry of the same multinationals. In particular, we analyze the performance of a panel of 10,650 firms operating in Romania in the period 1995-2001. Controlling for the simultaneity bias in productivity estimates through semi-parametric techniques, we find that changes in domestic firms’ TFP are positively related to the first foreign investment in a specific industry and region, but get significantly weaker and become negative as the number of multinationals that enter the considered industry/region crosses a specific threshold. These changing marginal effects can explain the lack of horizontal spillovers arising in traditional model designs. We also find these effects to vary between manufacturing and services, suggesting as a possible explanation a strategic change in technology transfer decisions by multinational firms as the market structure evolves.

74 citations


Posted Content
TL;DR: A taxonomy of bibliometric indicators of scientific performance is proposed, based on the property of consistency, and the h-index is shown not to have this important property.
Abstract: We propose a taxonomy of bibliometric indicators of scientific performance. The taxonomy relies on the property of consistency. The h-index is shown not to have this important property.
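A small illustration of what consistency requires and how the h-index can violate it: if scientist A ranks at least as high as B, adding identical new publications to both scientists should not reverse the ranking. The example data below are invented for illustration and do not come from the paper.

```python
def h_index(citations):
    """Largest h such that the scientist has h papers with at least h citations."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

a = [3, 3, 3]   # h = 3
b = [5, 5]      # h = 2  -> A ranks above B
extra = [5, 5]  # identical new output added to both scientists

print(h_index(a), h_index(b))                  # 3 2
print(h_index(a + extra), h_index(b + extra))  # 3 4 -> the ranking is reversed
```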

32 citations


Posted Content
TL;DR: In this article, the authors propose a competitive simulation test bed to stimulate research and development of electronic agents that help balance energy supply and demand in a decentralized, self-organizing power grid with many distributed renewable generators.
Abstract: The energy sector will undergo fundamental changes over the next ten years. Prices for fossil energy resources are continuously increasing, there is an urgent need to reduce CO2 emissions, and the United States and European Union are strongly motivated to become more independent from foreign energy imports. These factors will lead to installation of large numbers of distributed renewable energy generators, which are often intermittent in nature. This trend conflicts with the current power grid control infrastructure and strategies, where a few centralized control centers manage a limited number of large power plants such that their output meets the energy demands in real time. As the proportion of distributed and intermittent generation capacity increases, this task becomes much harder, especially as the local and regional distribution grids where renewable energy generators are usually installed are currently virtually unmanaged, lack real-time metering and are not built to cope with power flow inversions (yet). All this is about to change, and so the control strategies must be adapted accordingly. While the hierarchical command-and-control approach served well in a world with a few large scale generation facilities and many small consumers, a more flexible, decentralized, and self-organizing control infrastructure will have to be developed that can be actively managed to balance both the large grid as a whole, as well as the many lower voltage sub-grids. We propose a competitive simulation test bed to stimulate research and development of electronic agents that help manage these tasks. Participants in the competition will develop intelligent agents that are responsible for balancing energy supply from generators with energy demand from consumers. The competition is designed to closely model reality by bootstrapping the simulation environment with real historic load, generation, and weather data. The simulation environment will provide a low-risk platform that combines simulated markets and real-world data to develop solutions that can be applied to help build the self-organizing intelligent energy grid of the future.

21 citations


Posted Content
TL;DR: Surprisingly, but in line with practice, the results of the adapted travel time model show that random and class-based storage normally outperform full turnover-based dedicated storage.
Abstract: In the past thirty years the full turnover-based storage policy as described by Hausman et al. (1976, Management Science 22(6)) has been widely claimed to outperform the commonly used ABC class-based storage policy, in terms of the resulting average storage and retrieval machine travel time. In practice however, ABC storage is the dominant policy. Hausman et al. (1976) model the turnover-based policy under the unrealistic assumption of shared storage, i.e. the storage space allocated to one product can only accommodate its average inventory level; no specific space is reserved to store the maximum inventory of a product. It appears that many authors citing Hausman et al.’s results overlook this assumption and use the resulting storage and retrieval machine travel times as if it were valid for full turnover-based storage. Full turnover-based storage is a dedicated storage policy where the storage space allocated to one product must accommodate its maximum inventory level. This paper adapts the travel time model of Hausman et al. to accommodate full turnover-based dedicated storage. Surprisingly, the result of the adapted travel time model is opposite to that of Hausman et al. (1976) but, in line with practice, it supports that ABC (2- or 3-) class-based storage normally outperforms full turnover-based storage.

21 citations


Posted Content
TL;DR: In this article, the authors consider a make-to-stock production system with known exogenous replenishments and multiple customer classes, and the objective is to maximize profit over the planning horizon by deciding whether to accept or reject a given order, in anticipation of more profitable future orders.
Abstract: In this paper, we consider a make-to-stock production system with known exogenous replenishments and multiple customer classes. The objective is to maximize profit over the planning horizon by deciding whether to accept or reject a given order, in anticipation of more profitable future orders. What distinguishes this setup from classical airline revenue management problems is the explicit consideration of past and future replenishments and the integration of inventory holding and backlogging costs. If stock is on-hand, orders can be fulfilled immediately, backlogged or rejected. In shortage situations, orders can be either rejected or backlogged to be fulfilled from future arriving supply. The described decision problem occurs in many practical settings, notably in make-to-stock production systems, in which production planning is performed on a mid-term level, based on aggregated demand forecasts. In the short term, acceptance decisions about incoming orders are then made according to stock on-hand and scheduled production quantities. We model this problem as a stochastic dynamic program and characterize its optimal policy. It turns out that the optimal fulfillment policy has a relatively simple structure and is easy to implement. We evaluate this policy numerically and find that it systematically outperforms common current fulfillment policies, such as first-come-first-served and deterministic optimization.
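The sketch below gives a toy version of such an order-acceptance dynamic program (finite horizon, one unit per order, two customer classes, known replenishments, holding and backlog costs). All numbers and the state-space bound are assumptions; the paper's model and its optimal-policy characterization are richer.

```python
"""Toy order-acceptance dynamic program (illustrative assumptions throughout)."""
import numpy as np

T = 12                                                  # planning periods
replenishment = [5, 0, 0, 5, 0, 0, 5, 0, 0, 5, 0, 0]    # known exogenous supply
classes = [(10.0, 0.35), (4.0, 0.45)]                   # (revenue, arrival probability)
h, b = 0.5, 1.5                                         # holding / backlog cost per unit-period
X = 25                                                  # net inventory bounded to [-X, X]

def cost(x):
    return h * max(x, 0) + b * max(-x, 0)

idx = lambda x: x + X
V = np.zeros((T + 1, 2 * X + 1))                        # terminal value V[T, .] = 0

for t in range(T - 1, -1, -1):
    for x in range(-X, X + 1):
        x_pre = min(x + replenishment[t], X)            # scheduled supply arrives first
        p_none = 1.0 - sum(p for _, p in classes)
        v = p_none * (V[t + 1, idx(x_pre)] - cost(x_pre))
        for revenue, p in classes:
            reject = V[t + 1, idx(x_pre)] - cost(x_pre)
            x_acc = max(x_pre - 1, -X)                   # fulfill from stock or backlog one unit
            accept = revenue + V[t + 1, idx(x_acc)] - cost(x_acc)
            v += p * max(accept, reject)                 # optimal accept/reject decision
        V[t, idx(x)] = v

print("expected profit with zero net stock at t = 0:", round(V[0, idx(0)], 2))
```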

17 citations


Posted Content
TL;DR: The potential economic value of installed base data for spare parts logistics is highlighted and various data quality issues that are associated with the use of installed base data are discussed to show that planning performance depends on the quality dimensions.
Abstract: Many of the challenges in spare parts logistics emerge due to the combination of large service networks and sporadic/slow-moving demand. Customer heterogeneity and stringent service deadlines entail further challenges. Meanwhile, high revenue rates in service operations motivate companies to invest in and optimize the service logistics function. An important aspect of the spare parts logistics function is its ability to support customer-specific requirements with respect to service deadlines. To support customer-specific operations, many companies are actively maintaining and utilizing installed base data during forecasting, planning and execution stages. In this paper, we highlight the potential economic value of installed base data for spare parts logistics. We also discuss various data quality issues that are associated with the use of installed base data and show that planning performance depends on the quality dimensions.

15 citations


Posted Content
TL;DR: In this paper, the authors analyzed how different spatial structures, in particular the monocentricity and polycentricity dimension, affect the economic performance of U.S. metropolitan areas.
Abstract: Recent concepts such as megaregions and polycentric urban regions emphasize that external economies are not confined to a single urban core, but shared among a collection of close-by and linked cities. However, empirical analyses of agglomeration and agglomeration externalities have so far neglected the multicentric spatial organization of agglomeration and the possibility of ‘sharing’ or ‘borrowing’ of size between cities. This paper takes up this empirical challenge by analyzing how different spatial structures, in particular the monocentricity-polycentricity dimension, affect the economic performance of U.S. metropolitan areas. OLS and 2SLS models explaining labor productivity show that spatial structure matters. Polycentricity is associated with higher labor productivity. This appears to justify suggestions that, compared to relatively monocentric metropolitan areas, agglomeration diseconomies remain relatively limited in the more polycentric metropolitan areas, while agglomeration externalities are indeed to some extent shared among the cities in such an area. However, it was also found that a network of geographically proximate smaller cities cannot provide a substitute for the urbanization externalities of a single large city.
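As a hedged illustration of the kind of cross-sectional regression described, the sketch below runs an OLS of log labor productivity on a polycentricity index and controls; the variable names and data are invented, and the paper's 2SLS step with instruments is omitted.

```python
"""Sketch of an OLS regression of metropolitan labor productivity on spatial
structure; variable names and data are invented for illustration."""
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300                                              # hypothetical metropolitan areas
df = pd.DataFrame({
    "log_population": rng.normal(13.5, 1.0, n),
    "polycentricity": rng.uniform(0.0, 1.0, n),      # 0 = monocentric, 1 = polycentric
    "human_capital":  rng.normal(0.3, 0.1, n),
})
# Simulated outcome so the example runs end-to-end.
df["log_labor_productivity"] = (3.0 + 0.05 * df.log_population
                                + 0.10 * df.polycentricity
                                + 0.80 * df.human_capital
                                + rng.normal(0, 0.1, n))

X = sm.add_constant(df[["log_population", "polycentricity", "human_capital"]])
ols = sm.OLS(df["log_labor_productivity"], X).fit(cov_type="HC1")
print(ols.summary().tables[1])
```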

Posted Content
TL;DR: In this paper, the authors examined the relationship between knowledge management (in terms of external acquisition and internal sharing) and innovation behavior and found that external acquisition activity enhances a firm's awareness of available knowledge opportunities.
Abstract: This study examines the relationship between knowledge management (KM) (in terms of external acquisition and internal sharing) and innovation behavior. The concept of absorptive capacity and assumptions from the dynamic capabilities view underlie the proposed framework and hypotheses. The framework is empirically tested using a random sample of 649 Dutch small to medium sized enterprises (SMEs). Our empirical results indicate that external acquisition practices play a key role in fostering SMEs’ innovativeness while internal sharing practices do not appear to have a significant influence. External acquisition activity enhances a firm’s awareness of available knowledge opportunities. Firms which actively acquire external knowledge (regardless of the type of knowledge) may build a greater competitive dynamic capability to sense and seize business opportunities which in turn may lead to new or improved products or processes. We suggest that owners/entrepreneurs of SMEs and their firms will benefit in the long term if they strategically manage knowledge, especially using external acquisition practices.

Posted Content
TL;DR: In this article, the authors provide a methodological synthesis of the theories enabling them to bring statistical evidence to the debate and find that blue ocean and competitive strategies overlap and managers do not face a discrete either/or decision between each strategy.
Abstract: Blue ocean strategy seeks to turn strategic management on its head by replacing ‘competitive advantage’ with ‘value innovation’ as the primary goal where firms must create consumer demand and exploit untapped markets. Empirical analysis has been focused on case study evidence and so lacks generality to resolve the debate. We provide a methodological synthesis of the theories enabling us to bring statistical evidence to the debate. Our analysis finds that blue ocean and competitive strategies overlap and managers do not face a discrete either/or decision between each strategy. Our evidence for the Dutch retail industry indicates that blue ocean strategy has prevailed as a dominant long term viable strategy.

Posted Content
TL;DR: In this paper, the role of social interactions at the work floor for understanding gender pay differences in the EU was explored using data from the Fourth European Working Conditions Survey, and they found that sex similarity of subordinate and supervisor decreases the pay disadvantage for women in non-managerial occupations.
Abstract: This paper explores the role of social interactions at the work floor for understanding gender pay differences in the EU. Using data from the Fourth European Working Conditions Survey, we find that sex similarity of subordinate and supervisor decreases the pay disadvantage for women in non-managerial occupations, though working for a female boss is associated with a lower wage than working for a man. This may point at a ‘discrimination-for-pay’ effect. Female workers can avoid part of the discrimination against them by working for a woman and accepting lower pay. And when they face stronger discrimination in the situation of a male supervisor, they are ‘bribed’ by being offered a higher salary. Different results are obtained for managerial workers where sex similarity of worker and superior actually puts women at a further disadvantage. In addition to effects of vertical gender segregation, we examine whether wage formation is influenced by the proportion of women per sector (i.e., horizontal segregation), but find only weak support for the so-called social bias theory. Our main message is that while the traditional human capital model tends to study the wage formation process in isolation, gender pay differentials can also be seen as a social phenomenon, stemming from social interactions in labor markets.

Posted Content
TL;DR: In this paper, a simple bibliometric indicator that has similar properties to the h-index and does not suffer from the inconsistency problem is presented, and the authors argue that the use of this indicator is preferable to the use of the h-index.
Abstract: The h-index is a popular bibliometric performance indicator. We discuss a fundamental problem of the h-index. We refer to this problem as the problem of inconsistency. There turns out to be a very simple bibliometric indicator that has similar properties to the h-index and that does not suffer from the inconsistency problem. We argue that the use of this indicator is preferable to the use of the h-index.

Posted Content
TL;DR: The status of national standards education policies in twenty countries is explored, and similarities and differences between these policies are related to the standardization education activities in place.
Abstract: This paper stems from a research project carried out for the Asia Pacific Economic Cooperation (APEC) to make an inventory of national standards education policies. Twenty countries (sixteen Asia-Pacific economies and four European nations) have been investigated. The paper relates similarities and differences between these policies to the standardization education activities in place. The paper concludes with policy recommendations.

Posted Content
TL;DR: In this article, the authors examined the prevalence of different knowledge management practices and organizational determinants of knowledge management among SMEs by conducting a quantitative study of empirical data from nearly 500 Dutch SMEs.
Abstract: In this study, we examine the prevalence of different KM practices and the organizational determinants of KM among SMEs by conducting a quantitative study of empirical data from nearly 500 Dutch SMEs. Our empirical results show that knowledge is managed in a people-based approach in SMEs. SMEs are most likely to acquire knowledge by staying in touch with professionals and experts outside the company, and they tend to share knowledge and experience by talking to each other. Furthermore, KM is dependent on other organizational resources and processes. Organizational learning and competitive strategy with a formality approach are positive determinants of KM, while family orientation is a negative determinant. One of the challenges in the current study was to clearly distinguish, on an empirical basis, the previously defined concepts of knowledge management practices and organizational learning. Although in theory they are distinct, the results of this study lead us to conclude that they may overlap in practice. In the conclusion, we recommend a learning-oriented knowledge management model for SMEs which combines aspects of the two literatures.

OtherDOI
TL;DR: In this paper, the authors discuss the key network concepts, such as social capital, relational embeddedness (strong and weak ties), structural embeddedness (i.e., structural holes), and the central role of knowledge in the discovery and realisation of innovations.
Abstract: Social networks matter in the innovation processes of young and small firms, since ‘innovation does not exist in a vacuum’ (Van De Ven, 1986: 601). The contacts a firm has could both generate advantages for further innovation and growth, and disadvantages leading to inertia and stagnation. In the first case the existing social network or the new business contact provides opportunities furthering eventual success; in the second case, the existing network or the new business contacts turn out to have a constraining or even detrimental effect on performance. The search for and use of social capital is driven by goal-specificity: it only includes those ties that help the actor in the attainment of particular goals. Most of the research so far has been deliberately or unwillingly one-sided, for instance by only looking at entrepreneurial firms in dynamic industries (or more specifically, start-ups in the high-tech industries). Or selective attention has been paid to either the internal sources or the external contacts that trigger innovation. And when a study has been conducted investigating the effect of both internal and external ties on innovation, the sample often includes large and established companies and managers (instead of entrepreneurs and smaller firms, which are what we are interested in). The main line of reasoning in this paper is as follows. In the first section we discuss the key network concepts, such as social capital, relational embeddedness (strong and weak ties), and structural embeddedness (i.e. structural holes). Section two deals with innovation and the central role of knowledge in the discovery and realisation of innovations. Social networks and their potential for knowledge brokering appear to be important, and therefore the last section focuses on the relationship between particular network characteristics and innovation.

Posted Content
TL;DR: There are two methodologies for theory-testing with cases: (a) testing in a single case (a ‘theory-testing single case study’) and (b) testing in a sample of cases (a ‘theory-testing sample case study’). As the authors discuss, the functional form of the proposition that is tested determines which of these two methodologies should be used.
Abstract: Theory-testing with cases is ascertaining whether the empirical evidence in a case or in a sample of cases either supports or does not support the theory. There are two methodologies for theory-testing with cases, (a) testing in a single case (‘theory-testing single case study’), and (b) testing in a sample of cases (‘theory-testing sample case study’). The functional form of the proposition that is tested determines which of these two methodologies should be used.

Book ChapterDOI
Abstract: Recent years have seen great revenue management successes, notably in the airline, hotel, and car rental businesses. Currently, an increasing number of industries, including manufacturers and retailers, are exploring ways to adopt similar concepts. Software companies are taking an active role in promoting the broadening range of applications. Additionally, technological advances, including smart shelves and radio frequency identification (RFID), are removing many of the barriers to extended revenue management. The rapid developments in supply chain planning and revenue management software solutions, scientific models, and industry applications have created a complex picture, which is not yet well understood. It is not evident which scientific models fit which industry applications and which aspects are still missing. The relation between available software solutions and applications as well as scientific models appears equally unclear. The goal of this paper is to help overcome this confusion. To this end, we structure and review three dimensions, namely applications, models, and software. Subsequently, we relate these dimensions to each other and highlight commonalities and discrepancies. This comparison also provides a basis for identifying future research needs.

Journal Article
TL;DR: A business code of ethics is widely regarded as an important instrument to curb unethical behavior in the workplace, but little is empirically known about the factors that determine the impact of a code on unethical behavior.
Abstract: A business code of ethics is widely regarded as an important instrument to curb unethical behavior in the workplace. However, little is empirically known about the factors that determine the impact of a code on unethical behavior. Besides the existence of a code, this study proposes five determining factors: the content of the code, the frequency of communication activities surrounding the code, the quality of those communication activities, the embedment of the code in the organization by senior management, and its embedment by local management. The full model explains 30.4% of unethical behavior while the explanatory value of a code alone is very modest.

Posted Content
TL;DR: This paper presents a temporal extension of the very expressive fragment SHIN(D) of the OWL-DL language, resulting in the tOWL language; its TimeSlices/Fluents layer implements a perdurantist view on individuals and allows for the representation of complex temporal aspects, such as process state transitions.
Abstract: The Web Ontology Language (OWL) is the most expressive standard language for modeling ontologies on the Semantic Web. In this paper, we present a temporal extension of the very expressive fragment SHIN(D) of the OWL-DL language, resulting in the tOWL language. Through a layered approach we introduce three extensions: i) Concrete Domains, which allows the representation of restrictions using concrete domain binary predicates, ii) Temporal Representation, which introduces timepoints, relations between timepoints, intervals, and Allen’s 13 interval relations into the language, and iii) TimeSlices/Fluents, which implements a perdurantist view on individuals and allows for the representation of complex temporal aspects, such as process state transitions. We illustrate the expressiveness of the newly introduced language by providing a TBox representation of Leveraged Buy Out (LBO) processes in financial applications and an ABox representation of one specific LBO.
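For readers unfamiliar with Allen's 13 interval relations used in the Temporal Representation layer, the sketch below classifies the relation between two intervals in plain Python; this is illustrative code, not tOWL syntax, and the interval values are invented.

```python
"""Classify the Allen relation that holds between two time intervals."""
from dataclasses import dataclass

@dataclass
class Interval:
    start: float
    end: float

def allen_relation(a: Interval, b: Interval) -> str:
    """Return which of Allen's 13 interval relations holds between a and b."""
    if a.end < b.start:
        return "before"
    if b.end < a.start:
        return "after"
    if a.end == b.start:
        return "meets"
    if b.end == a.start:
        return "met-by"
    if a.start == b.start and a.end == b.end:
        return "equals"
    if a.start == b.start:
        return "starts" if a.end < b.end else "started-by"
    if a.end == b.end:
        return "finishes" if a.start > b.start else "finished-by"
    if b.start < a.start and a.end < b.end:
        return "during"
    if a.start < b.start and b.end < a.end:
        return "contains"
    return "overlaps" if a.start < b.start else "overlapped-by"

# Hypothetical process states, e.g. two consecutive phases of an LBO.
print(allen_relation(Interval(0, 5), Interval(5, 9)))   # meets
print(allen_relation(Interval(2, 8), Interval(4, 6)))   # contains
```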

Posted Content
TL;DR: Two approaches to determine attribute weights in a dissimilarity measure based on product popularity are presented: a Poisson regression model and a novel boosting model minimizing Poisson deviance.
Abstract: In content- and knowledge-based recommender systems, a measure of (dis)similarity between products is often used. Frequently, this measure is based on the attributes of the products. However, which attributes are important for the users of the system remains an important question to answer. In this paper, we present two approaches to determine attribute weights in a dissimilarity measure based on product popularity. We count how many times products are sold and, based on this, we create two models to determine attribute weights: a Poisson regression model and a novel boosting model minimizing Poisson deviance. We evaluate these two models in two ways, namely using a clickstream analysis on four different product catalogs and a user experiment. The clickstream analysis shows that for each product catalog the standard equal weights model is outperformed by at least one of the weighting models. The user experiment shows that users seem to have a different notion of product similarity in an experimental context.
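A hedged sketch of the first of the two approaches: a Poisson regression of sales counts on product attributes, with the fitted coefficients turned into normalized attribute weights. The attribute names, the data, and the weight normalization are assumptions, and the boosting variant minimizing Poisson deviance is not shown.

```python
"""Poisson regression of sales counts on product attributes, with coefficients
turned into normalized attribute weights (all data and names invented)."""
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500                                      # hypothetical products in a catalog
catalog = pd.DataFrame({
    "price":  rng.uniform(50, 500, n),
    "power":  rng.uniform(600, 1200, n),     # e.g. microwave wattage
    "volume": rng.uniform(15, 35, n),        # litres
})
# Simulated sales counts so the sketch runs end-to-end.
lam = np.exp(1.0 - 0.004 * catalog.price + 0.001 * catalog.power)
catalog["sales"] = rng.poisson(lam)

X = sm.add_constant(catalog[["price", "power", "volume"]])
fit = sm.GLM(catalog["sales"], X, family=sm.families.Poisson()).fit()

# One possible way to turn coefficients into attribute weights: absolute
# coefficient size, normalized to sum to one.
weights = np.abs(fit.params.drop("const"))
print((weights / weights.sum()).round(3))
```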

Posted Content
TL;DR: In this article, the authors evaluate four dissimilarity measures for product recommendation using an online survey, namely the Euclidean Distance, the Hamming Distance, the Heterogeneous Euclidean-Overlap Metric, and the Adapted Gower Coefficient.
Abstract: Many content-based recommendation approaches rely on a dissimilarity measure based on the product attributes. In this paper, we evaluate four dissimilarity measures for product recommendation using an online survey. In this survey, we asked users to specify which products they considered to be relevant recommendations given a reference product. We used microwave ovens as the product category. Based on these responses, we create a relative relevance matrix which we use to evaluate the dissimilarity measures. Also, we use this matrix to estimate weights to be used in the dissimilarity measures. In this way, we evaluate four dissimilarity measures: the Euclidean Distance, the Hamming Distance, the Heterogeneous Euclidean-Overlap Metric, and the Adapted Gower Coefficient. The evaluation shows that these weights improve recommendation performance. Furthermore, the experiments indicate that when recommending a single product, the Heterogeneous Euclidean-Overlap Metric should be used and when recommending more than one product the Adapted Gower Coefficient is the best alternative. Finally, we compare these dissimilarity measures with a collaborative method and show that this method performs worse than the dissimilarity-based approaches.
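To make the compared measures concrete, the sketch below implements the Euclidean Distance, the Hamming Distance, the Heterogeneous Euclidean-Overlap Metric, and a standard (unweighted) Gower coefficient for two hypothetical microwave ovens; the paper's Adapted Gower Coefficient and its survey-estimated weights differ in the details.

```python
"""Attribute-based (dis)similarity measures for two hypothetical products."""
import math

def euclidean(x, y, numeric):
    return math.sqrt(sum((x[a] - y[a]) ** 2 for a in numeric))

def hamming(x, y, attrs):
    return sum(1 for a in attrs if x[a] != y[a])

def heom(x, y, numeric, nominal, ranges):
    """Heterogeneous Euclidean-Overlap Metric: range-scaled numeric differences
    combined with 0/1 overlap on nominal attributes."""
    d2 = sum(((x[a] - y[a]) / ranges[a]) ** 2 for a in numeric)
    d2 += sum(0 if x[a] == y[a] else 1 for a in nominal)
    return math.sqrt(d2)

def gower(x, y, numeric, nominal, ranges, weights=None):
    """Standard Gower coefficient: weighted average of per-attribute dissimilarities."""
    attrs = list(numeric) + list(nominal)
    weights = weights or {a: 1.0 for a in attrs}
    per_attr = {a: abs(x[a] - y[a]) / ranges[a] for a in numeric}
    per_attr.update({a: 0.0 if x[a] == y[a] else 1.0 for a in nominal})
    return sum(weights[a] * per_attr[a] for a in attrs) / sum(weights.values())

numeric, nominal = ["price", "power"], ["grill"]
ranges = {"price": 450.0, "power": 600.0}
m1 = {"price": 120.0, "power": 800.0, "grill": "yes"}
m2 = {"price": 250.0, "power": 1000.0, "grill": "no"}
print("euclidean:", round(euclidean(m1, m2, numeric), 3))
print("hamming:  ", hamming(m1, m2, numeric + nominal))
print("heom:     ", round(heom(m1, m2, numeric, nominal, ranges), 3))
print("gower:    ", round(gower(m1, m2, numeric, nominal, ranges), 3))
```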

Posted Content
TL;DR: In this article, the authors focus on the electronics industry and analyze whether remanufacturing of mobile phones and personal computers substantially mitigates the energy used in the life cycle of these products, or whether, as with most electrical equipment, it can contribute only marginally to such a reduction.
Abstract: Remanufacturing has long been perceived as an environmentally-friendly initiative. The question of how remanufacturing moderates the relation between environmental impact and economic returns is still unanswered, however. In this paper, we focus our attention on the electronics industry. In particular, we take a close look at remanufacturing within the mobile phone and personal computer industries. We analyze whether remanufacturing for such products substantially mitigates the energy used in the life-cycle of these products, or whether, as with most electrical equipment, it can only marginally contribute to such reduction. Using both process-based and economic input-output data, we show that remanufacturing significantly reduces total energy consumption. Furthermore, we test the ubiquitous hypothesis that the market of remanufactured products is composed of products that have been downgraded and are therefore sold for prices below the average price of new equipment. Using data from 9,900 real transactions obtained from eBay, we show that this assumption is true for personal computers, but not for mobile phones. More importantly, despite the fact that remanufactured products may suffer downgrading, and that consumers therefore demand a high discount for them, the economic output per energy unit used is still higher for remanufactured products. We thus conclude that remanufacturing for these two products is not only environmentally friendly, but also eco-efficient.

Posted Content
TL;DR: An existing model for rolling stock scheduling is extended to the specific requirements of the real-time case and applied within a rolling horizon framework that decomposes the problem.
Abstract: This paper deals with real-time disruption management of rolling stock in passenger railway transportation. We present a generic framework for modeling disruptions in railway rolling stock schedules. The framework is presented as an online combinatorial decision problem where the uncertainty of a disruption is modeled by a sequence of information updates. To decompose the problem we propose a rolling horizon approach where only rolling stock decisions within a certain time horizon from the time of rescheduling are taken into account. The schedules are then revised as the situation progresses and more accurate information becomes available. We extend an existing model for rolling stock scheduling to the specific requirements of the real-time case and apply it in the rolling horizon framework. We perform computational tests on instances constructed from real life cases and explore the consequences of different settings of the approach for the trade-off between solution quality and computation time.
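A generic rolling-horizon skeleton in the spirit of the approach described: re-optimize only the decisions within the horizon each time new disruption information arrives. The actual rescheduling model is a large optimization model, so solve_window below is a hypothetical placeholder and the task representation is invented.

```python
"""Generic rolling-horizon rescheduling loop (solve_window is a placeholder)."""

def rolling_horizon(schedule, disruption_updates, horizon, solve_window):
    """Revise the plan each time new disruption information arrives,
    re-optimizing only the tasks that fall within `horizon` of that time."""
    for t, update in disruption_updates:                 # sequence of information updates
        window = [task for task in schedule if t <= task["time"] <= t + horizon]
        revised = solve_window(window, update)           # re-optimize the visible window
        schedule = [task for task in schedule
                    if task["time"] < t or task["time"] > t + horizon]
        schedule.extend(revised)
        schedule.sort(key=lambda task: task["time"])
    return schedule

# Trivial usage: a 'solver' that just annotates the tasks it has revised.
demo_schedule = [{"time": t, "unit": "A"} for t in range(10)]
updates = [(2, "unit B unavailable"), (5, "unit B available again")]
result = rolling_horizon(demo_schedule, updates, horizon=3,
                         solve_window=lambda w, u: [dict(task, note=u) for task in w])
print(result[:4])
```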

Posted Content
TL;DR: In this paper, the authors propose a working theory of innovative competence development in an emerging private sector, combining resource-based and institutional perspectives and arguing that Chinese private enterprises in Hangzhou were able to develop unique innovative competences to overcome resource constraints and manage technical and market risks while respecting the location- and sector-specific constraints.
Abstract: What kind of innovative competences are credibly developed by private entrepreneurs in China’s transition economy? On the basis of original empirical fieldwork in 45 software enterprises in Hangzhou, Zhejiang Province, we propose a working theory of innovative competence development in an emerging private sector. Combining resource-based and institutional perspectives, we argue that Chinese private enterprises in Hangzhou were able to develop unique innovative competences to overcome resource constraints and manage technical and market risks while respecting the location- and sector-specific constraints. The findings suggest that private software enterprises in Hangzhou developed five innovative competences: organizational integration, financial commitment, external knowledge transformation, reputation development and strategic flexibility. The analysis further allows us to propose three implications: 1) these five competences form a ‘configuration’ or coherent set of competences in this particular institutional setting; 2) technological and institutional regimes shape the potential range of innovative competences firms credibly develop, depending on the available resources of the firm; 3) innovative competences can be functional equivalents of institutions in the absence of well-developed, mature formal institutions.

Posted Content
TL;DR: In this paper, a Markov regime switching model with mixture copulas is proposed to explain changes in the strength and structure of the dependence of asset returns, and the model is estimated for international equity markets.
Abstract: We develop a new model that can explain changes in the strength and structure of the dependence of asset returns. Dependence can become stronger (weaker), symmetric (asymmetric) or tail (in)dependent. We use a Markov regime switching model with mixture copulas. Changes in the strength of dependence are modeled as changes in the parameters of the copulas. Changes in symmetry and tail dependence are represented by adjustments of the mixture weights. We estimate our model for international equity markets and find that investors who ignore one type of dependence make relative measurement errors in Value-at-Risk of up to 15%.
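As a hedged illustration of regime-dependent dependence (not the paper's estimated model), the sketch below lets a two-state Markov chain switch between a symmetric Gaussian copula and a lower-tail-dependent Clayton copula and reports the lower-tail co-exceedance rate; all parameter values are assumptions.

```python
"""Two-regime simulation: Gaussian-copula dependence in one regime,
lower-tail-dependent Clayton-copula dependence in the other."""
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def sample_gaussian_copula(rho):
    z1, e = rng.standard_normal(2)
    z2 = rho * z1 + np.sqrt(1 - rho ** 2) * e
    return norm.cdf(z1), norm.cdf(z2)

def sample_clayton_copula(theta):
    # Conditional sampling: draw u, then invert the conditional copula at w.
    u, w = rng.uniform(size=2)
    v = ((w ** (-theta / (1 + theta)) - 1) * u ** (-theta) + 1) ** (-1 / theta)
    return u, v

P = np.array([[0.95, 0.05],          # regime transition matrix (illustrative values)
              [0.10, 0.90]])
state, draws = 0, []
for _ in range(5000):
    state = rng.choice(2, p=P[state])
    draws.append(sample_gaussian_copula(0.3) if state == 0
                 else sample_clayton_copula(2.0))
u = np.array(draws)
# Probability that both uniforms fall in the lower 5% tail together.
print("lower-tail co-exceedance rate:", np.mean((u[:, 0] < 0.05) & (u[:, 1] < 0.05)))
```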

Posted Content
TL;DR: Theory-building with cases, as discussed by the authors, is a form of discovering new propositions in empirical evidence in which only those theoretical formulations that are confirmed in a test in the sample from which the proposition was built are accepted as results of the theory-building study.
Abstract: Theory-building with cases is (a) formulating new propositions that emerge from the empirical evidence in a sample of cases and (b) testing them in the same sample. The main difference from most other forms of generating new propositions (such as analyzing the theoretical literature, brainstorming, etc.) is its empirical character. The main difference from other forms of discovering new propositions in empirical evidence (such as in ‘exploratory’ research) is that only those theoretical formulations are accepted as a result of the theory-building study that are confirmed in a test in the sample from which the proposition was built. It is possible that a proposition about a relationship between two variables emerges from an exploratory single case study (e.g., when both variables have extreme values in that case), but it is not possible to test that new proposition in the same study because this would require a comparison in a sample of cases. The term theory-building study (as distinct from an exploratory study) is used here only for studies in which a proper test of the new proposition has been conducted.

Posted Content
TL;DR: A new method is presented to manage open storage locations such that the average system travel time for processing a block of storage and retrieval jobs in an automated warehousing system is minimized.
Abstract: A warehouse needs to have sufficient open locations to be able to store incoming shipments of various sizes. In combination with ongoing load retrievals, open locations gradually spread over the storage area. Unfavorable positions of open locations negatively impact the average load retrieval times. This paper presents a new method to manage these open locations such that the average system travel time for processing a block of storage and retrieval jobs in an automated warehousing system is minimized. We introduce the effective storage area (ESA), a well-defined part of the locations closest to the depot, where only a part of the open locations (the effective open locations), together with all the products, is stored. We determine the optimal number of effective open locations and the ESA boundary minimizing the average travel time. Using the ESA policy, the travel time of a pair of storage and retrieval jobs can be reduced by more than 10% on average. Its performance hardly depends on the number or the sequence of retrievals. In fact, in the case of only one retrieval, applying the policy already leads to beneficial results. Application is also easy; the ESA size can be changed dynamically during storage and retrieval operations.