
Showing papers by "HEC Montréal published in 2020"


Journal ArticleDOI
18 Sep 2020
TL;DR: This Review provides a strong rationale for using AI-based assistive tools for drug repurposing medications for human disease, including during the COVID-19 pandemic.
Abstract: Drug repurposing or repositioning is a technique whereby existing drugs are used to treat emerging and challenging diseases, including COVID-19. Drug repurposing has become a promising approach because of the opportunity for reduced development timelines and overall costs. In the big data era, artificial intelligence (AI) and network medicine offer cutting-edge application of information science to defining disease, medicine, therapeutics, and identifying targets with the least error. In this Review, we introduce guidelines on how to use AI for accelerating drug repurposing or repositioning, for which AI approaches are not just formidable but are also necessary. We discuss how to use AI models in precision medicine, and as an example, how AI models can accelerate COVID-19 drug repurposing. Rapidly developing, powerful, and innovative AI and network medicine technologies can expedite therapeutic development. This Review provides a strong rationale for using AI-based assistive tools for drug repurposing medications for human disease, including during the COVID-19 pandemic.

328 citations


Journal ArticleDOI
TL;DR: This work focuses on routing problems with drones, mostly in the context of parcel delivery, and surveys and classify the existing works and provides perspectives for future research.
Abstract: The interest in using drones in various applications has grown significantly in recent years. The reasons are related to the continuous advances in technology, especially the advent of fast microprocessors, which support intelligent autonomous control of several systems. Photography, construction, and monitoring and surveillance are only some of the areas in which the use of drones is becoming common. Among these, last-mile delivery is one of the most promising areas. In this work we focus on routing problems with drones, mostly in the context of parcel delivery. We survey and classify the existing works and we provide perspectives for future research.

189 citations


Proceedings Article
30 Apr 2020
TL;DR: This paper found that exposure bias appears to be less of an issue than the complications arising from non-differentiable, sequential GAN training, and that MLE trained models provide a better quality/diversity trade-off compared to their GAN counterparts, all while being easier to train and less computationally expensive.
Abstract: Traditional natural language generation (NLG) models are trained using maximum likelihood estimation (MLE), which differs from the sample generation inference procedure. During training the ground truth tokens are passed to the model; however, during inference, the model instead reads its previously generated samples - a phenomenon coined exposure bias. Exposure bias was hypothesized to be a root cause of poor sample quality, and thus many generative adversarial networks (GANs) were proposed as a remedy, since they have identical training and inference. However, many of the ensuing GAN variants validated sample quality improvements but ignored loss of sample diversity. This work reiterates the fallacy of quality-only metrics and clearly demonstrates that the well-established technique of reducing softmax temperature can outperform GANs on a quality-only metric. Further, we establish a definitive quality-diversity evaluation procedure using temperature tuning over local and global sample metrics. Under this procedure, we find that MLE models consistently outperform the proposed GAN variants over the whole quality-diversity space. Specifically, we find that 1) exposure bias appears to be less of an issue than the complications arising from non-differentiable, sequential GAN training; 2) MLE trained models provide a better quality/diversity trade-off compared to their GAN counterparts, all while being easier to train, easier to cross-validate, and less computationally expensive.
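The temperature-reduction baseline discussed above is easy to reproduce in miniature. The sketch below is illustrative only (the logits, seed, and sample counts are invented, not taken from the paper): lowering the softmax temperature concentrates probability mass on the most likely token, trading diversity for quality.

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    """Sample an index from the temperature-scaled softmax of `logits`."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                      # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = [2.0, 1.0, 0.1]

# Low temperature: mass concentrates on the argmax (quality up, diversity down).
low_t = [sample_with_temperature(logits, 0.1, rng) for _ in range(100)]
# High temperature: the distribution flattens (diversity up, quality down).
high_t = [sample_with_temperature(logits, 10.0, rng) for _ in range(100)]
```

Sweeping the temperature while measuring one quality metric and one diversity metric at each setting traces out the quality-diversity curve the authors use for evaluation.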

177 citations


Journal ArticleDOI
TL;DR: In this paper, the authors synthesize the growing BT, SFR, and PHC literatures by systematically reviewing 236 articles across 21 years using an integrative conceptual framework, and showcase how the mature field of SFR in concert with the younger but prolific BT and product-harm crisis fields can enrich one another while jointly advancing a broad and unified discipline of negative events in marketing.
Abstract: Research studies on brand transgression (BT), service failure and recovery (SFR), and product-harm crisis (PHC) appear to have a common focus, yet the three streams developed surprisingly independently and with limited reference to one another. This situation is unfortunate because all three fields study a similar phenomenon by using complementary conceptualizations, theories, and methods; we argue that this development in silos represents an unnecessary obstacle to the development of a common discipline. In response, this review synthesizes the growing BT, SFR, and PHC literatures by systematically reviewing 236 articles across 21 years using an integrative conceptual framework. In doing so, we showcase how the mature field of SFR in concert with the younger but prolific BT and PHC fields can enrich one another while jointly advancing a broad and unified discipline of negative events in marketing. Through this process, we provide and explicate seven overarching insights across three major themes (theory, dynamic aspects, and method) to encourage researchers to contribute to the interface between these three important fields. The review concludes with academic contributions and practical implications.

154 citations


Journal ArticleDOI
TL;DR: Vehicle routing problems have been the focus of extensive research over the past sixty years, driven by their economic importance and their theoretical interest as mentioned in this paper, and the diversity of applications has motivated the study of a myriad of problem variants with different attributes.

153 citations


Proceedings Article
30 Apr 2020
TL;DR: InfoGraph as mentioned in this paper learns graph-level representations by maximizing the mutual information between the graph level representation and the representations of substructures of different scales (e.g., nodes, edges, triangles).
Abstract: This paper studies learning the representations of whole graphs in both unsupervised and semi-supervised scenarios. Graph-level representations are critical in a variety of real-world applications such as predicting the properties of molecules and community analysis in social networks. Traditional graph kernel based methods are simple, yet effective for obtaining fixed-length representations for graphs, but they suffer from poor generalization due to hand-crafted designs. There are also some recent methods based on language models (e.g. graph2vec) but they tend to only consider certain substructures (e.g. subtrees) as graph representatives. Inspired by recent progress of unsupervised representation learning, in this paper we propose a novel method called InfoGraph for learning graph-level representations. We maximize the mutual information between the graph-level representation and the representations of substructures of different scales (e.g., nodes, edges, triangles). By doing so, the graph-level representations encode aspects of the data that are shared across different scales of substructures. Furthermore, we propose InfoGraph*, an extension of InfoGraph for semi-supervised scenarios. InfoGraph* maximizes the mutual information between unsupervised graph representations learned by InfoGraph and the representations learned by existing supervised methods. As a result, the supervised encoder learns from unlabeled data while preserving the latent semantic space favored by the current supervised task. Experimental results on the tasks of graph classification and molecular property prediction show that InfoGraph is superior to state-of-the-art baselines and InfoGraph* can achieve performance competitive with state-of-the-art semi-supervised models.

147 citations


Journal ArticleDOI
TL;DR: In this paper, a review of practitioner-oriented articles, interviews with key informants working for B2B organizations, and a webinar with sales professionals are presented to help firms better understand and respond to the COVID-19 pandemic and other crises.

121 citations


Journal ArticleDOI
13 Feb 2020
TL;DR: Building on a review of conceptual articles that develop process theoretical contributions published in two major journals, this paper proposes a typology of four process theorizing styles, labelled linear, parallel, recursive and conjunctive.
Abstract: In recent years, there have been many calls for scholars to innovate in their styles of conceptual work, and in particular to develop process theoretical contributions that consider the dynamic unf...

114 citations


Journal ArticleDOI
TL;DR: In this article, the authors explored how using a green product (e.g., a pair of headphones made from recycled materials) influenced the enjoyment of the accompanying consumption experience, referred to as the greenconsumption effect.
Abstract: In many situations, consumers use green products without a deliberate choice to use or purchase the product. This research explores how using a green product (e.g., a pair of headphones made from recycled materials) influences the enjoyment of the accompanying consumption experience (e.g., listening to music), even if consumers have not deliberately chosen or purchased the product. Five experiments in actual consumption settings revealed that using a green (vs. conventional) product enhances the enjoyment of the accompanying consumption experience, referred to as the greenconsumption effect. Merely using a green product makes consumers perceive an increase in the extent to which they are valued as individuals by society, which leads to warm glow feelings, and consequently enhances the enjoyment of the accompanying consumption experience. When consumers experience low social worth, the positive effect of using green products on the accompanying consumption experience is amplified. The greenconsumption effect disappears when the negative environmental impact of the green product attribute is low. From a managerial standpoint, the current research identifies instances where brands can benefit from going green and encourages marketers, especially service providers, to promote green products that are instrumental in consumption experiences.

98 citations


Journal ArticleDOI
TL;DR: Workload and innovative work behavior are widely studied research topics as discussed by the authors; however, the relationship between them is not well understood and, as a result, there is no consensus on whether workload is good or bad for employee innovation.
Abstract: Is workload good or bad for employee innovation? Workload and innovative work behavior are widely studied research topics. However, the relationship between them is not well understood. As a result...

92 citations


Proceedings ArticleDOI
22 Sep 2020
TL;DR: This work develops an algorithm that leverages classical recommendation models for causal recommendation and demonstrates that the proposed algorithm is more robust to unobserved confounders and improves recommendation.
Abstract: The task of recommender systems is classically framed as a prediction of users’ preferences and users’ ratings. However, its spirit is to answer a counterfactual question: “What would the rating be if we ‘forced’ the user to watch the movie?” This is a question about an intervention, that is a causal inference question. The key challenge of this causal inference is unobserved confounders, variables that affect both which items the users decide to interact with and how they rate them. To this end, we develop an algorithm that leverages classical recommendation models for causal recommendation. Across simulated and real datasets, we demonstrate that the proposed algorithm is more robust to unobserved confounders and improves recommendation.

Posted Content
TL;DR: Experimental results show that G2Gs significantly outperforms existing template-free approaches by up to 63% in terms of the top-1 accuracy and achieves a performance close to that of state-of-the-art template-based approaches, but does not require domain knowledge and is much more scalable.
Abstract: A fundamental problem in computational chemistry is to find a set of reactants to synthesize a target molecule, a.k.a. retrosynthesis prediction. Existing state-of-the-art methods rely on matching the target molecule with a large set of reaction templates, which are very computationally expensive and also suffer from the problem of coverage. In this paper, we propose a novel template-free approach called G2Gs by transforming a target molecular graph into a set of reactant molecular graphs. G2Gs first splits the target molecular graph into a set of synthons by identifying the reaction centers, and then translates the synthons to the final reactant graphs via a variational graph translation framework. Experimental results show that G2Gs significantly outperforms existing template-free approaches by up to 63% in terms of the top-1 accuracy and achieves a performance close to that of state-of-the-art template-based approaches, but does not require domain knowledge and is much more scalable.

Journal ArticleDOI
TL;DR: The use of Fitbit devices in interventions has the potential to promote healthy lifestyles in terms of physical activity and weight and subgroup analysis and fsQCA demonstrated that, in addition to the effects of theFitbit devices, setting activity goals was the most important intervention component.
Abstract: Background: Unhealthy behaviors, such as physical inactivity, sedentary lifestyle, and unhealthful eating, remain highly prevalent, posing formidable challenges in efforts to improve cardiovascular health. While traditional interventions to promote healthy lifestyles are both costly and effective, wearable trackers, especially Fitbit devices, can provide a low-cost alternative that may effectively help large numbers of individuals become more physically fit and thereby maintain a good health status. Objective: The objectives of this meta-analysis are (1) to assess the effectiveness of interventions that incorporate a Fitbit device for healthy lifestyle outcomes (eg, steps, moderate-to-vigorous physical activity, and weight) and (2) to identify which additional intervention components or study characteristics are the most effective at improving healthy lifestyle outcomes. Methods: A systematic review was conducted, searching the following databases from 2007 to 2019: MEDLINE, EMBASE, CINAHL, and CENTRAL (Cochrane). Studies were included if (1) they were randomized controlled trials, (2) the intervention involved the use of a Fitbit device, and (3) the reported outcomes were related to healthy lifestyles. The main outcome measures were related to physical activity, sedentary behavior, and weight. All the studies were assessed for risk of bias using Cochrane criteria. A random-effects meta-analysis was conducted to estimate the treatment effect of interventions that included a Fitbit device compared with a control group. We also conducted subgroup analysis and fuzzy-set qualitative comparative analysis (fsQCA) to further disentangle the effects of intervention components. Results: Our final sample comprised 41 articles reporting the results of 37 studies. 
For Fitbit-based interventions, we found a statistically significant increase in daily step count (mean difference [MD] 950.54, 95% CI 475.89-1425.18; P<.001) and moderate-to-vigorous physical activity (MD 6.16, 95% CI 2.80-9.51; P<.001), a significant decrease in weight (MD −1.48, 95% CI −2.81 to −0.14; P=.03), and a nonsignificant decrease in objectively assessed and self-reported sedentary behavior (MD −10.62, 95% CI −35.50 to 14.27; P=.40 and standardized MD −0.11, 95% CI −0.48 to 0.26; P=.56, respectively). In general, the included studies were at low risk for bias, except for performance bias. Subgroup analysis and fsQCA demonstrated that, in addition to the effects of the Fitbit devices, setting activity goals was the most important intervention component. Conclusions: The use of Fitbit devices in interventions has the potential to promote healthy lifestyles in terms of physical activity and weight. Fitbit devices may be useful to health professionals for patient monitoring and support. Trial Registration: PROSPERO International Prospective Register of Systematic Reviews CRD42019145450; https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42019145450
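The pooled effects above (e.g., MD 950.54, 95% CI 475.89-1425.18 for daily steps) come from inverse-variance weighted meta-analysis. A minimal fixed-effect sketch follows; the review itself fits a random-effects model, which additionally estimates between-study variance, and the three study results below are invented for illustration.

```python
import math

def pooled_mean_difference(mds, ses):
    """Fixed-effect inverse-variance pooling of per-study mean differences.

    mds: per-study mean differences; ses: their standard errors.
    Returns the pooled MD and its 95% confidence interval.
    """
    weights = [1.0 / se ** 2 for se in ses]          # inverse-variance weights
    pooled = sum(w * md for w, md in zip(weights, mds)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Hypothetical step-count differences (steps/day) from three studies
md, lo, hi = pooled_mean_difference([1200.0, 800.0, 950.0], [300.0, 250.0, 400.0])
```

More precise studies (smaller standard errors) receive larger weights, which is why the pooled estimate sits closest to the second study here.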

Journal ArticleDOI
TL;DR: This paper comprehensively reviews the existing work on the VRPSPD, surveys mathematical formulations, algorithms, variants, case studies, and industrial applications, and provides an overview of trends in the literature.

Journal ArticleDOI
TL;DR: This paper solves a multi-trip drone routing problem, where drones’ energy consumption is modeled as a nonlinear function of payload and travel distance and uses a 2-index formulation to model the problem and develops a branch-and-cut algorithm for the formulation.
Abstract: Drone delivery is known as a potential contributor in improving efficiency and alleviating last-mile delivery problems. For this reason, drone routing and scheduling has become a highly active area of research in recent years. Unlike the vehicle routing problem, however, designing drones’ routes is challenging due to multiple operational characteristics including multi-trip operations, recharge planning, and energy consumption calculation. To fill some important gaps in the literature, this paper solves a multi-trip drone routing problem, where drones’ energy consumption is modeled as a nonlinear function of payload and travel distance. We propose adding logical cuts and subgradient cuts in the solution process to tackle the more complex nonlinear (convex) energy function, instead of using the linear approximation method as in the literature, which can fail to detect infeasible routes due to excess energy consumption. We use a 2-index formulation to model the problem and develop a branch-and-cut algorithm for the formulation. Benchmark instances are first generated for this problem. Numerical tests indicate that even though the original model is nonlinear, the proposed approach can solve large problems to optimality. In addition, in multiple instances, the linear approximation model yields routes that under the nonlinear energy model would be energy infeasible. Use of a linear approximation for drone energy leads to differences in energy consumption of about 9% on average compared to the nonlinear energy model.
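The subgradient cuts mentioned above rely on convexity: a tangent to a convex energy function never overestimates it, so tangents added at violated points form valid cuts that tighten toward the true nonlinear energy, whereas a fixed linear interpolation can misjudge feasibility between its breakpoints. A toy sketch with an invented convex energy model (the coefficients and functional form are illustrative, not the paper's):

```python
def energy(q, dist, c1=1.0, c2=0.05, frame=5.0):
    """Hypothetical convex energy (Wh) to fly `dist` km carrying payload q kg."""
    return dist * (c1 + c2 * (frame + q) ** 1.5)

def tangent_cut(q0, dist, c1=1.0, c2=0.05, frame=5.0):
    """Return (intercept, slope) of the tangent to energy(., dist) at q0."""
    slope = dist * c2 * 1.5 * (frame + q0) ** 0.5   # derivative w.r.t. payload
    intercept = energy(q0, dist, c1, c2, frame) - slope * q0
    return intercept, slope

dist = 10.0
a, b = tangent_cut(2.0, dist)
# By convexity the cut never exceeds the true energy, so gap >= 0 everywhere
# and gap == 0 at the point of tangency (q = 2).
gap = [energy(q, dist) - (a + b * q) for q in (0.0, 1.0, 2.0, 3.0, 5.0)]
```

In a branch-and-cut loop, whenever a candidate route's linearized energy is below its true nonlinear energy, a tangent cut at that point cuts the candidate off without ever excluding a feasible route.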

Journal ArticleDOI
TL;DR: In this paper, the authors content-analyze a comprehensive sample of 101 effectuation articles published in JCR®-listed journals between 1998 and 2016 (inclusively), with the specific aim of uncovering the main conceptual and methodological articulations that have underpinned effectuation research to date.
Abstract: In spite of all the scholarly attention it has garnered, effectuation research continues to face a series of theoretical and methodological challenges. In order to help move effectuation research forward, we content-analyze a comprehensive sample of 101 effectuation articles published in JCR®-listed journals between 1998 and 2016 (inclusively), with the specific aim of uncovering the main conceptual and methodological articulations that have underpinned effectuation research to date. In doing so, we not only uncover some of the field’s achievements and shortcomings but also examine the extent to which published effectuation research addresses its most salient criticisms. We build on these observations to propose three recommendations for future advances, namely (1) conceiving effectuation as a “mode of action”; (2) developing new methodological indicators centered on effectuation’s concrete manifestations; and (3) examining the underlying dynamics explaining effectuation’s antecedents and consequences.

Proceedings Article
12 Jul 2020
TL;DR: The proposed continuous graph neural networks are robust to over-smoothing and hence allow us to build deeper networks, which in turn are able to capture the long-range dependencies between nodes.
Abstract: This paper builds on the connection between graph neural networks and traditional dynamical systems. We propose continuous graph neural networks (CGNN), which generalise existing graph neural networks with discrete dynamics in that they can be viewed as a specific discretisation scheme. The key idea is to characterise the continuous dynamics of node representations, i.e. the derivatives of node representations w.r.t. time. Inspired by existing diffusion-based methods on graphs (e.g. PageRank and epidemic models on social networks), we define the derivatives as a combination of the current node representations, the representations of neighbors, and the initial values of the nodes. We propose and analyse two possible dynamics on graphs---in which each dimension of node representations (a.k.a. the feature channel) either changes independently or interacts with the others---both with theoretical justification. The proposed continuous graph neural networks are robust to over-smoothing and hence allow us to build deeper networks, which in turn are able to capture the long-range dependencies between nodes. Experimental results on the task of node classification demonstrate the effectiveness of our proposed approach over competitive baselines.
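The continuous dynamics described above can be simulated with a simple Euler scheme. The instantiation below, dH/dt = (Â − I)H + H0, combines neighbor representations and the initial node values as the abstract describes, but the exact parameterisation, the graph, and the features are invented for illustration:

```python
import numpy as np

def euler_cgnn(A, H0, t_end=1.0, steps=100):
    """Euler-integrate dH/dt = (A_hat - I) H + H0 on a graph.

    A: adjacency matrix; H0: initial node representations. A_hat is the
    row-normalised adjacency. This is one plausible diffusion-style dynamic,
    not the paper's exact parameterisation.
    """
    deg = A.sum(axis=1)
    A_hat = A / np.maximum(deg, 1.0)[:, None]       # row-normalised adjacency
    dt = t_end / steps
    H = H0.copy()
    for _ in range(steps):
        H = H + dt * ((A_hat - np.eye(len(A))) @ H + H0)
    return H

# Path graph 0-1-2 with 2-dimensional node features
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H0 = np.array([[1., 0.], [0., 1.], [0., 0.]])
H_T = euler_cgnn(A, H0)
```

Because the initial values H0 keep feeding the derivative, the representations diffuse over neighbors without collapsing to a constant, which is the intuition behind CGNN's robustness to over-smoothing.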

Journal ArticleDOI
TL;DR: Computational results show that the proposed heuristic is highly effective and can solve large-size instances within short computational times.

Journal ArticleDOI
TL;DR: A nonlinear mixed integer programming model is proposed to optimally determine fleet deployment along routes (including green technology adoption), sailing speeds on all legs, timetables, cargo allocation among routes for each origin-destination pair, and berth allocation considering the availability of shore power at different berths in order to minimize the total of five types of cost.
Abstract: The Emission Control Areas (ECAs) established by the International Maritime Organization are beneficial to reduce the sulphur emissions in maritime transportation but bring a significant increase in operating cost for shipping liners. Low sulphur emissions are required when ships berth or sail within ECAs. It is an irreversible trend that green technologies such as scrubbers and shore power will be implemented in the maritime shipping industry. However, the literature lacks a quantitative decision methodology on green technology adoption for fleet deployment in a shipping network in the context of ECAs. Given a shipping network with multiple routes connected by transshipment hubs, this study proposes a nonlinear mixed integer programming model to optimally determine fleet deployment along routes (including green technology adoption), sailing speeds on all legs, timetables, cargo allocation among routes for each origin-destination pair, and berth allocation considering the availability of shore power at different berths in order to minimize the total of five types of cost. A three-phase heuristic is also developed to solve this problem. Numerical experiments with real-world data are conducted to validate the effectiveness of the proposed model and the efficiency of the three-phase heuristic. Some managerial implications are also outlined on the basis of the numerical experiments.

Journal ArticleDOI
TL;DR: Sentometrics as mentioned in this paper is a portmanteau of sentiment and econometrics, which is used to transform qualitative sentiment data into quantitative sentiment variables, and use those variables in an econometric analysis of the relationships between sentiment and other variables.
Abstract: The advent of massive amounts of textual, audio, and visual data has spurred the development of econometric methodology to transform qualitative sentiment data into quantitative sentiment variables, and to use those variables in an econometric analysis of the relationships between sentiment and other variables. We survey this emerging research field and refer to it as sentometrics, which is a portmanteau of sentiment and econometrics. We provide a synthesis of the relevant methodological approaches, illustrate with empirical results, and discuss useful software.
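The first step of such a pipeline, converting qualitative text into a quantitative sentiment variable, can be sketched with a toy lexicon; the lexicon, documents, and dates below are invented, and real sentometrics work uses far richer scoring and aggregation schemes:

```python
from collections import defaultdict

# Invented mini-lexicon: positive words score +1, negative words -1
LEXICON = {"growth": 1, "strong": 1, "loss": -1, "recession": -1}

def score(text):
    """Sum of lexicon scores over the words of a document."""
    return sum(LEXICON.get(w, 0) for w in text.lower().split())

docs = [
    ("2020-03-01", "strong growth expected"),
    ("2020-03-01", "recession fears deepen"),
    ("2020-03-02", "loss after loss"),
]

# Aggregate document scores into a daily sentiment variable
daily = defaultdict(list)
for date, text in docs:
    daily[date].append(score(text))
sentiment = {d: sum(s) / len(s) for d, s in daily.items()}
```

The resulting daily series is the kind of quantitative sentiment variable that can then enter an econometric model alongside returns, macro indicators, or other covariates.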

Journal ArticleDOI
TL;DR: This paper positions citizen science as a leading information quality research frontier and shows how citizen science opens a unique opportunity for the information systems community to contribute to a broad range of disciplines in natural and social sciences and humanities.
Abstract: The rapid proliferation of online content producing and sharing technologies resulted in an explosion of user-generated content (UGC), which now extends to scientific data. Citizen science, in which ordinary people contribute information for scientific research, epitomizes UGC. Citizen science projects are typically open to everyone, engage diverse audiences, and challenge ordinary people to produce data of the highest quality to be usable in science. This also makes citizen science a very exciting area to study both traditional and innovative approaches to information quality management. With this paper we position citizen science as a leading information quality research frontier. We also show how citizen science opens a unique opportunity for the information systems community to contribute to a broad range of disciplines in natural and social sciences and humanities.

Posted Content
TL;DR: An EM-based algorithm optimizes a probabilistic model called RNNLogic, which treats logic rules as a latent variable and simultaneously trains a rule generator as well as a reasoning predictor with logic rules.
Abstract: This paper studies learning logic rules for reasoning on knowledge graphs. Logic rules provide interpretable explanations when used for prediction as well as being able to generalize to other tasks, and hence are critical to learn. Existing methods either suffer from the problem of searching in a large search space (e.g., neural logic programming) or ineffective optimization due to sparse rewards (e.g., techniques based on reinforcement learning). To address these limitations, this paper proposes a probabilistic model called RNNLogic. RNNLogic treats logic rules as a latent variable, and simultaneously trains a rule generator as well as a reasoning predictor with logic rules. We develop an EM-based algorithm for optimization. In each iteration, the reasoning predictor is first updated to explore some generated logic rules for reasoning. Then in the E-step, we select a set of high-quality rules from all generated rules with both the rule generator and reasoning predictor via posterior inference; and in the M-step, the rule generator is updated with the rules selected in the E-step. Experiments on four datasets prove the effectiveness of RNNLogic.

Proceedings Article
01 Jan 2020
TL;DR: It is shown in an empirical study that Continual-MAML, an online extension of the popular MAML algorithm, is better suited to the new scenario than the aforementioned methodologies, including standard continual learning and meta-learning approaches.
Abstract: Continual learning agents experience a stream of (related) tasks. The main challenge is that the agent must not forget previous tasks and also adapt to novel tasks in the stream. We are interested in the intersection of two recent continual-learning scenarios. In meta-continual learning, the model is pre-trained using meta-learning to minimize catastrophic forgetting of previous tasks. In continual-meta learning, the aim is to train agents for faster remembering of previous tasks through adaptation. In their original formulations, both methods have limitations. We stand on their shoulders to propose a more general scenario, OSAKA, where an agent must quickly solve new (out-of-distribution) tasks, while also requiring fast remembering. We show that current continual learning, meta-learning, meta-continual learning, and continual-meta learning techniques fail in this new scenario. We propose Continual-MAML, an online extension of the popular MAML algorithm, as a strong baseline for this scenario. We show in an empirical study that Continual-MAML is better suited to the new scenario than the aforementioned methodologies including standard continual learning and meta-learning approaches.

Journal ArticleDOI
TL;DR: As work and organizational realities become increasingly “post-bureaucratic,” the conventional and stable bases of a person's authority, such as their position, their expertise, or the acquiescence of a sub...
Abstract: As work and organizational realities become increasingly “post-bureaucratic,” the conventional and stable bases of a person’s authority—their position, their expertise, or the acquiescence of a sub...

Journal ArticleDOI
TL;DR: The results suggest that SMEs can attain high innovation performance through both sequential and simultaneous IT ambidexterity, thus providing a starting point for reconciling competing views of IT ambidexterity.

Posted Content
TL;DR: An Inductive Graph Neural Network Kriging (IGNNK) model is developed to recover data for unsampled sensors on a network/graph structure; inductive GNNs can be trained using dynamic adjacency matrices, and a trained model can be transferred to new graph structures.
Abstract: Time series forecasting and spatiotemporal kriging are the two most important tasks in spatiotemporal data analysis. Recent research on graph neural networks has made substantial progress in time series forecasting, while little attention has been paid to the kriging problem -- recovering signals for unsampled locations/sensors. Most existing scalable kriging methods (e.g., matrix/tensor completion) are transductive, and thus full retraining is required when we have a new sensor to interpolate. In this paper, we develop an Inductive Graph Neural Network Kriging (IGNNK) model to recover data for unsampled sensors on a network/graph structure. To generalize the effect of distance and reachability, we generate random subgraphs as samples and reconstruct the corresponding adjacency matrix for each sample. By reconstructing all signals on each sample subgraph, IGNNK can effectively learn the spatial message passing mechanism. Empirical results on several real-world spatiotemporal datasets demonstrate the effectiveness of our model. In addition, we also find that the learned model can be successfully transferred to the same type of kriging tasks on an unseen dataset. Our results show that: 1) GNN is an efficient and effective tool for spatial kriging; 2) inductive GNNs can be trained using dynamic adjacency matrices; 3) a trained model can be transferred to new graph structures and 4) IGNNK can be used to generate virtual sensors.
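IGNNK's training scheme of sampling random subgraphs and reconstructing all of their signals from a masked subset can be sketched as follows; the data, sizes, and zero-masking convention here are illustrative simplifications of the paper's setup:

```python
import numpy as np

def sample_training_batch(X, A, n_sub, n_mask, rng):
    """Sample a random subgraph and mask some of its sensors.

    X: (num_nodes, T) signal matrix; A: (num_nodes, num_nodes) adjacency.
    Returns masked inputs, the subgraph adjacency, the keep-mask, and the
    full targets: the model is trained to reconstruct all subgraph signals
    from the unmasked ones (a simplified sketch of IGNNK's scheme).
    """
    nodes = rng.choice(X.shape[0], size=n_sub, replace=False)
    X_sub, A_sub = X[nodes], A[np.ix_(nodes, nodes)]
    mask = np.ones(n_sub, dtype=bool)
    mask[rng.choice(n_sub, size=n_mask, replace=False)] = False
    X_in = X_sub * mask[:, None]          # masked sensors read zero
    return X_in, A_sub, mask, X_sub

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 24))             # 10 sensors, 24 time steps
A = (rng.random((10, 10)) < 0.3).astype(float)
X_in, A_sub, mask, target = sample_training_batch(X, A, n_sub=6, n_mask=2, rng=rng)
```

Because each batch rebuilds its own adjacency matrix, a model trained this way never commits to one fixed graph, which is what makes it inductive: at test time a brand-new sensor simply appears as another masked node.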

Posted Content
TL;DR: The proposed deep neural network modelling approach based on the N-BEATS neural architecture is very effective at solving the mid-term load forecasting (MTLF) problem and provides a statistically significant relative gain in terms of the MAPE error metric.
Abstract: This paper addresses the mid-term electricity load forecasting problem. Solving this problem is necessary for power system operation and planning as well as for negotiating forward contracts in deregulated energy markets. We show that our proposed modeling approach based on the N-BEATS deep neural architecture is effective at solving the mid-term electricity load forecasting problem. The proposed neural network has high expressive power to solve non-linear stochastic forecasting problems with time series including trends, seasonality and significant random fluctuations. At the same time, it is simple to implement and train, it does not require signal preprocessing, and it is equipped with a forecast bias reduction mechanism. We compare our approach against ten baseline methods, including classical statistical methods, machine learning and hybrid approaches, on 35 monthly electricity demand time series for European countries. The empirical study shows that the proposed neural network clearly outperforms all competitors in terms of both accuracy and forecast bias. Code is available here: this https URL.
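The accuracy comparison above is commonly reported with MAPE; a minimal implementation of that metric (the function name and the example numbers are ours, not the paper's):

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(a - f) / abs(a)
                       for a, f in zip(actual, forecast)) / len(actual)

# Two loads forecast with 10% and 5% absolute error average to 7.5% MAPE
err = mape([100.0, 200.0], [110.0, 190.0])
```

MAPE is scale-free, which is what makes it suitable for averaging across the 35 national demand series of very different magnitudes.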

Journal ArticleDOI
TL;DR: The authors empirically test the prediction of Pastor, Stambaugh, and Taylor (2020) that green firms outperform brown firms when concerns about climate change increase unexpectedly, using data for S&P 500 companies from January 2010 to June 2018.
Abstract: We empirically test the prediction of Pastor, Stambaugh, and Taylor (2020) that green firms outperform brown firms when concerns about climate change increase unexpectedly, using data for S&P 500 companies from January 2010 to June 2018. To capture unexpected increases in climate change concerns, we construct a Media Climate Change Concerns index using news about climate change published by major U.S. newspapers. We find that when concerns about climate change increase unexpectedly, green firms’ stock prices increase, while brown firms’ decrease. Further, using topic modeling, we conclude that climate change concerns affect returns both through investors updating their expectations about firms’ future cash flows and through changes in investors’ preferences for sustainability.
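One simple way to isolate *unexpected* changes in a concern index is to take residuals from a fitted AR(1); this is only a sketch of the general idea, as the paper's Media Climate Change Concerns index and its innovations are constructed differently:

```python
import numpy as np

def concern_innovations(index):
    """Fit AR(1) by least squares and return the residuals as a proxy for
    unexpected changes in a concern index (an illustrative sketch only)."""
    y, x = np.asarray(index[1:]), np.asarray(index[:-1])
    X = np.column_stack([np.ones_like(x), x])   # intercept + lagged value
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta                          # innovations (residuals)

idx = [0.2, 0.3, 0.1, 0.5, 0.4, 0.6, 0.9, 0.7]  # invented concern series
shocks = concern_innovations(idx)
```

The residuals strip out the predictable persistence of the series, leaving the surprise component one would relate to the green-minus-brown return spread.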

Journal ArticleDOI
TL;DR: In this article, the authors evaluate the importance of a country's fiscal capacity in explaining the relation between economic growth shocks and sovereign default risk for the COVID-19 pandemic.
Abstract: The COVID-19 pandemic provides a unique setting in which to evaluate the importance of a country's fiscal capacity in explaining the relation between economic growth shocks and sovereign default risk. For a sample of 30 developed countries, we find a positive and significant sensitivity of sovereign default risk to the intensity of the virus' spread for fiscally constrained governments. Supporting the fiscal channel, we confirm the results for Eurozone countries and US states, for which monetary policy can be held constant. Our analysis suggests that financial markets penalize sovereigns with low fiscal space, thereby impairing their resilience to external shocks.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the relationship between FDI and technological advancement and whether recipients' absorptive capacity matters, and they found empirical evidence that the global FDI network has a core-periphery structure, and that core countries are more technologically developed than peripheral countries.