Journal ArticleDOI
TL;DR: In this paper, the authors explore the economic impact of the sharing economy on incumbent firms by studying the case of Airbnb, a prominent platform for short-term accommodations, and quantify its impact on the Texas hotel industry over the subsequent decade.
Abstract: Peer-to-peer markets, collectively known as the sharing economy, have emerged as alternative suppliers of goods and services traditionally provided by long-established industries. The authors explore the economic impact of the sharing economy on incumbent firms by studying the case of Airbnb, a prominent platform for short-term accommodations. They analyze Airbnb’s entry into the state of Texas and quantify its impact on the Texas hotel industry over the subsequent decade. In Austin, where Airbnb supply is highest, the causal impact on hotel revenue is in the 8%–10% range; moreover, the impact is nonuniform, with lower-priced hotels and hotels that do not cater to business travelers being the most affected. The impact manifests itself primarily through less aggressive hotel room pricing, benefiting all consumers, not just participants in the sharing economy. The price response is especially pronounced during periods of peak demand, such as during the South by Southwest festival, and is due to a di...

1,204 citations


ReportDOI
TL;DR: In this article, the authors show that the impact of regularization bias and overfitting on estimation of the parameter of interest θ0 can be removed by using two simple, yet critical, ingredients: (1) using Neyman-orthogonal moments/scores that have reduced sensitivity with respect to nuisance parameters, and (2) making use of cross-fitting, which provides an efficient form of data-splitting.
Abstract: We revisit the classic semi-parametric problem of inference on a low-dimensional parameter θ0 in the presence of high-dimensional nuisance parameters η0. We depart from the classical setting by allowing for η0 to be so high-dimensional that the traditional assumptions (e.g. Donsker properties) that limit complexity of the parameter space for this object break down. To estimate η0, we consider the use of statistical or machine learning (ML) methods, which are particularly well suited to estimation in modern, very high-dimensional cases. ML methods perform well by employing regularization to reduce variance and trading off regularization bias with overfitting in practice. However, both regularization bias and overfitting in estimating η0 cause a heavy bias in estimators of θ0 that are obtained by naively plugging ML estimators of η0 into estimating equations for θ0. This bias results in the naive estimator failing to be N^(-1/2) consistent, where N is the sample size. We show that the impact of regularization bias and overfitting on estimation of the parameter of interest θ0 can be removed by using two simple, yet critical, ingredients: (1) using Neyman-orthogonal moments/scores that have reduced sensitivity with respect to nuisance parameters to estimate θ0; (2) making use of cross-fitting, which provides an efficient form of data-splitting. We call the resulting set of methods double or debiased ML (DML). We verify that DML delivers point estimators that concentrate in an N^(-1/2)-neighbourhood of the true parameter values and are approximately unbiased and normally distributed, which allows construction of valid confidence statements. The generic statistical theory of DML is elementary and simultaneously relies on only weak theoretical requirements, which will admit the use of a broad array of modern ML methods for estimating the nuisance parameters, such as random forests, lasso, ridge, deep neural nets, boosted trees, and various hybrids and ensembles of these methods. We illustrate the general theory by applying it to provide theoretical properties of the following: DML applied to learn the main regression parameter in a partially linear regression model; DML applied to learn the coefficient on an endogenous variable in a partially linear instrumental variables model; DML applied to learn the average treatment effect and the average treatment effect on the treated under unconfoundedness; DML applied to learn the local average treatment effect in an instrumental variables setting. In addition to these theoretical applications, we also illustrate the use of DML in three empirical examples.
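
As a concrete illustration of these two ingredients, here is a minimal sketch of DML for the partially linear model y = d·θ0 + g(X) + ε: nuisance functions are fit out-of-fold, and θ0 is recovered from the orthogonal residual-on-residual moment. The random-forest learners, fold count, and variable names are illustrative choices, not the paper's prescription.

```python
# Minimal DML sketch for the partially linear model y = d*theta + g(X) + eps.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def dml_plr(y, d, X, n_folds=5, seed=0):
    """Cross-fitted estimate of theta via the Neyman-orthogonal score."""
    res_y = np.zeros_like(y, dtype=float)  # out-of-fold residual y - E[y|X]
    res_d = np.zeros_like(d, dtype=float)  # out-of-fold residual d - E[d|X]
    for train, test in KFold(n_folds, shuffle=True, random_state=seed).split(X):
        m_y = RandomForestRegressor(random_state=seed).fit(X[train], y[train])
        m_d = RandomForestRegressor(random_state=seed).fit(X[train], d[train])
        res_y[test] = y[test] - m_y.predict(X[test])
        res_d[test] = d[test] - m_d.predict(X[test])
    # Orthogonal moment: regress residualized outcome on residualized treatment.
    return (res_d @ res_y) / (res_d @ res_d)

# Toy check with true theta = 0.5.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
d = X[:, 0] + rng.normal(size=2000)
y = 0.5 * d + np.sin(X[:, 1]) + rng.normal(size=2000)
print(dml_plr(y, d, X))  # should land near 0.5
```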

1,204 citations


Journal ArticleDOI
Zheng Zhao, Weihai Chen, Xingming Wu, Peter C. Y. Chen, Jingmeng Liu
TL;DR: A novel traffic forecast model based on the long short-term memory (LSTM) network is proposed, which considers temporal-spatial correlation in the traffic system via a two-dimensional network composed of many memory units.
Abstract: Short-term traffic forecast is one of the essential issues in intelligent transportation systems. Accurate forecast results enable commuters to choose appropriate travel modes, routes, and departure times, which is meaningful in traffic management. To improve forecast accuracy, a feasible way is to develop a more effective approach for traffic data analysis. Abundant traffic data and computational power have become available in recent years, motivating us to improve the accuracy of short-term traffic forecast via deep learning approaches. A novel traffic forecast model based on the long short-term memory (LSTM) network is proposed. Different from conventional forecast models, the proposed LSTM network considers temporal-spatial correlation in the traffic system via a two-dimensional network composed of many memory units. A comparison with other representative forecast models validates that the proposed LSTM network can achieve better performance.
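
As a rough illustration of the general approach (not the authors' exact two-dimensional architecture), the sketch below maps a window of past readings from several road sensors to a next-step forecast for every sensor; the layer sizes and single-layer design are assumptions for the example.

```python
# Minimal LSTM traffic forecaster: past readings -> next-step prediction.
import torch
import torch.nn as nn

class TrafficLSTM(nn.Module):
    def __init__(self, n_sensors, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_sensors)

    def forward(self, x):              # x: (batch, time, n_sensors)
        out, _ = self.lstm(x)          # out: (batch, time, hidden)
        return self.head(out[:, -1])   # next-step value for every sensor

model = TrafficLSTM(n_sensors=32)
x = torch.randn(8, 12, 32)             # 8 samples, 12 past intervals, 32 sensors
print(model(x).shape)                  # torch.Size([8, 32])
```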

1,204 citations


Journal ArticleDOI
TL;DR: In this paper, the authors review recent findings on adversarial examples for DNNs, summarize the methods for generating adversarial samples, and propose a taxonomy of these methods.
Abstract: With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks (DNNs) have recently been found vulnerable to well-designed input samples called adversarial examples. Adversarial perturbations are imperceptible to humans but can easily fool DNNs in the testing/deployment stage. The vulnerability to adversarial examples has become one of the major risks for applying DNNs in safety-critical environments. Therefore, attacks on and defenses against adversarial examples have drawn great attention. In this paper, we review recent findings on adversarial examples for DNNs, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications of adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples. In addition, three major challenges in adversarial examples and the potential solutions are discussed.
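
One classic generation method covered by such surveys is the fast gradient sign method (FGSM). The sketch below is a generic PyTorch rendering under assumed conventions: `model` is any differentiable classifier, inputs live in [0, 1], and the budget `eps` is an arbitrary illustrative value.

```python
# FGSM sketch: perturb the input along the sign of the loss gradient.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """Return adversarial versions of inputs x (labels y) within an
    L-infinity ball of radius eps, clamped to the valid pixel range."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One step that locally maximizes the loss; often imperceptible to humans.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```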

1,203 citations


Proceedings Article
07 May 2015
TL;DR: The m-RNN model directly models the probability distribution of generating a word given previous words and an image, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval.
Abstract: In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences and achieve significant performance improvements over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu/~junhua.mao/m-RNN.html.
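
A skeletal rendering of the multimodal idea follows: the recurrent state is fused with a CNN image feature in a shared multimodal layer before predicting the next word. The GRU cell and all dimensions are simplifying assumptions, not the paper's exact configuration.

```python
# Toy multimodal captioner: fuse RNN state with an image feature per step.
import torch
import torch.nn as nn

class MiniMRNN(nn.Module):
    def __init__(self, vocab_size, img_dim=512, emb=128, hid=256, mm=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.rnn = nn.GRU(emb, hid, batch_first=True)
        self.fuse = nn.Linear(hid + img_dim, mm)  # multimodal fusion layer
        self.out = nn.Linear(mm, vocab_size)      # next-word logits

    def forward(self, words, img_feat):
        # words: (B, T) token ids; img_feat: (B, img_dim) CNN feature
        h, _ = self.rnn(self.embed(words))                      # (B, T, hid)
        img = img_feat.unsqueeze(1).expand(-1, h.size(1), -1)   # repeat per step
        fused = torch.relu(self.fuse(torch.cat([h, img], dim=-1)))
        return self.out(fused)                                  # (B, T, vocab)
```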

1,203 citations


Journal ArticleDOI
TL;DR: Pembrolizumab monotherapy demonstrated promising activity and manageable safety in patients with advanced gastric or gastroesophageal junction cancer who had previously received at least 2 lines of treatment.
Abstract: Importance Therapeutic options are needed for patients with advanced gastric cancer whose disease has progressed after 2 or more lines of therapy. Objective To evaluate the safety and efficacy of pembrolizumab in a cohort of patients with previously treated gastric or gastroesophageal junction cancer. Design, Setting, and Participants In the phase 2, global, open-label, single-arm, multicohort KEYNOTE-059 study, 259 patients in 16 countries were enrolled in a cohort between March 2, 2015, and May 26, 2016. Median (range) follow-up was 5.8 (0.5-21.6) months. Intervention Patients received pembrolizumab, 200 mg, intravenously every 3 weeks until disease progression, investigator or patient decision to withdraw, or unacceptable toxic effects. Main Outcomes and Measures Primary end points were objective response rate and safety. Objective response rate was assessed by central radiologic review per Response Evaluation Criteria in Solid Tumors, version 1.1, in all patients and those with programmed cell death 1 ligand 1 (PD-L1)–positive tumors. Expression of PD-L1 was assessed by immunohistochemistry. Secondary end points included response duration. Results Of 259 patients enrolled, most were male (198 [76.4%]) and white (200 [77.2%]); median (range) age was 62 (24-89) years. Objective response rate was 11.6% (95% CI, 8.0%-16.1%; 30 of 259 patients), with complete response in 2.3% (95% CI, 0.9%-5.0%; 6 of 259 patients). Median (range) response duration was 8.4 (1.6+ to 17.3+) months (+ indicates that patients had no progressive disease at their last assessment). Objective response rate and median (range) response duration were 15.5% (95% CI, 10.1%-22.4%; 23 of 148 patients) and 16.3 (1.6+ to 17.3+) months and 6.4% (95% CI, 2.6%-12.8%; 7 of 109 patients) and 6.9 (2.4 to 7.0+) months in patients with PD-L1–positive and PD-L1–negative tumors, respectively. Forty-six patients (17.8%) experienced 1 or more grade 3 to 5 treatment-related adverse events. Two patients (0.8%) discontinued because of treatment-related adverse events, and 2 deaths were considered related to treatment. Conclusions and Relevance Pembrolizumab monotherapy demonstrated promising activity and manageable safety in patients with advanced gastric or gastroesophageal junction cancer who had previously received at least 2 lines of treatment. Durable responses were observed in patients with PD-L1–positive and PD-L1–negative tumors. Further study of pembrolizumab for this group of patients is warranted. Trial Registration clinicaltrials.gov Identifier: NCT02335411

1,203 citations


Journal ArticleDOI
TL;DR: Particle-in-cell (PIC) methods have a long history in the study of laser-plasma interactions as discussed by the authors, and they have been widely used in the literature.
Abstract: Particle-in-cell (PIC) methods have a long history in the study of laser-plasma interactions. Early electromagnetic codes used the Yee staggered grid for field variables combined with a leapfrog EM-field update and the Boris algorithm for particle pushing. The general properties of such schemes are well documented. Modern PIC codes tend to add to these: high-order shape functions for particles, Poisson-preserving field updates, collisions, ionisation, hybrid schemes for solid density, and high-field QED effects. In addition to these physics packages, the increase in computing power now allows simulations with real mass ratios, full 3D dynamics, and multi-speckle interaction. This paper presents a review of the core algorithms used in current laser-plasma specific PIC codes. Also reported are estimates of self-heating rates, convergence of collisional routines, and tests of ionisation models, which are not readily available elsewhere. Having reviewed the status of PIC algorithms, we present a summary of recent applications of such codes in laser-plasma physics, concentrating on SRS, short-pulse laser-solid interactions, fast-electron transport, and QED effects.
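
For reference, here is a sketch of the Boris particle push named above, in its non-relativistic form: a half electric kick, an energy-conserving rotation around B, and a second half kick. Variable names and SI conventions are our own, not tied to any particular code.

```python
# Boris velocity push for one particle over a timestep dt (non-relativistic).
import numpy as np

def boris_push(v, E, B, q, m, dt):
    """Advance velocity v (3-vector) in fields E, B with charge q, mass m."""
    qmdt2 = q * dt / (2.0 * m)
    v_minus = v + qmdt2 * E                   # first half electric kick
    t = qmdt2 * B                             # rotation vector from B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)   # magnetic rotation, |v| preserved
    return v_plus + qmdt2 * E                 # second half electric kick

# Example: electron gyrating in a uniform B field.
v = boris_push(np.array([1e5, 0.0, 0.0]), np.zeros(3),
               np.array([0.0, 0.0, 1e-3]), -1.602e-19, 9.109e-31, 1e-12)
print(v)
```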

1,203 citations


Journal ArticleDOI
TL;DR: Phytotaxa is currently contributing more than a quarter of the ca 2000 species that are described every year, showing that it has become a major contributor to the dissemination of new species discovery, but the rate of discovery is slowing down.
Abstract: We have counted the currently known, described and accepted number of plant species as ca 374,000, of which approximately 308,312 are vascular plants, with 295,383 flowering plants (angiosperms; monocots: 74,273; eudicots: 210,008). Global numbers of smaller plant groups are as follows: algae ca 44,000, liverworts ca 9,000, hornworts ca 225, mosses 12,700, lycopods 1,290, ferns 10,560 and gymnosperms 1,079. Phytotaxa is currently contributing more than a quarter of the ca 2000 species that are described every year, showing that it has become a major contributor to the dissemination of new species discovery. However, the rate of discovery is slowing down, due to reduction in financial and scientific support for fundamental natural history studies.

1,202 citations


Proceedings ArticleDOI
25 Apr 2019
TL;DR: A simplified network based on a query-independent formulation is created that maintains the accuracy of NLNet but with significantly less computation; this simplified design shares a similar structure with the Squeeze-Excitation Network (SENet), and the resulting global context network (GCNet) generally outperforms both the simplified NLNet and SENet on major benchmarks for various recognition tasks.
Abstract: The Non-Local Network (NLNet) presents a pioneering approach for capturing long-range dependencies, via aggregating query-specific global context to each query position. However, through a rigorous empirical analysis, we have found that the global contexts modeled by the non-local network are almost the same for different query positions within an image. In this paper, we take advantage of this finding to create a simplified network based on a query-independent formulation, which maintains the accuracy of NLNet but with significantly less computation. We further observe that this simplified design shares a similar structure with the Squeeze-Excitation Network (SENet). Hence, we unify them into a three-step general framework for global context modeling. Within the general framework, we design a better instantiation, called the global context (GC) block, which is lightweight and can effectively model the global context. The lightweight property allows us to apply it to multiple layers in a backbone network to construct a global context network (GCNet), which generally outperforms both the simplified NLNet and SENet on major benchmarks for various recognition tasks.
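
A condensed sketch of the described GC block: one query-independent attention map pools the feature map into a global context vector, a bottleneck transform processes it, and the result is added back to every position. The bottleneck ratio and layer choices here are illustrative.

```python
# Global context (GC) block sketch: shared attention pooling + bottleneck.
import torch
import torch.nn as nn

class GCBlock(nn.Module):
    def __init__(self, c, ratio=16):
        super().__init__()
        self.attn = nn.Conv2d(c, 1, kernel_size=1)   # one attention map for all queries
        self.transform = nn.Sequential(
            nn.Conv2d(c, c // ratio, 1),
            nn.LayerNorm([c // ratio, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(c // ratio, c, 1),
        )

    def forward(self, x):                            # x: (B, C, H, W)
        b, c, h, w = x.shape
        w_attn = self.attn(x).view(b, 1, h * w).softmax(-1)           # (B, 1, HW)
        ctx = torch.bmm(x.view(b, c, h * w), w_attn.transpose(1, 2))  # (B, C, 1)
        ctx = ctx.view(b, c, 1, 1)                   # global context vector
        return x + self.transform(ctx)               # broadcast fusion via addition
```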

1,202 citations


Posted Content
TL;DR: Multi-task learning (MTL) as mentioned in this paper is a learning paradigm in machine learning and its aim is to leverage useful information contained in multiple related tasks to help improve the generalization performance of all the tasks.
Abstract: Multi-Task Learning (MTL) is a learning paradigm in machine learning whose aim is to leverage useful information contained in multiple related tasks to help improve the generalization performance of all the tasks. In this paper, we give a survey of MTL from the perspective of algorithmic modeling, applications, and theoretical analyses. For algorithmic modeling, we give a definition of MTL and then classify different MTL algorithms into five categories: the feature learning approach, low-rank approach, task clustering approach, task relation learning approach, and decomposition approach, and we discuss the characteristics of each approach. To further improve the performance of learning tasks, MTL can be combined with other learning paradigms, including semi-supervised learning, active learning, unsupervised learning, reinforcement learning, multi-view learning, and graphical models. When the number of tasks is large or the data dimensionality is high, we review online, parallel, and distributed MTL models as well as dimensionality reduction and feature hashing to reveal their computational and storage advantages. Many real-world applications use MTL to boost their performance, and we review representative works in this paper. Finally, we present theoretical analyses and discuss several future directions for MTL.
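
As a concrete instance of the feature learning approach mentioned above, the sketch below shares a trunk representation across tasks with separate per-task heads; the two-layer structure and sizes are assumptions for the example, not a design the survey prescribes.

```python
# Hard parameter sharing: one shared trunk, one output head per task.
import torch
import torch.nn as nn

class SharedTrunkMTL(nn.Module):
    def __init__(self, in_dim, hidden, task_out_dims):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, d) for d in task_out_dims)

    def forward(self, x):
        z = self.trunk(x)                         # representation shared by all tasks
        return [head(z) for head in self.heads]   # one output per task

model = SharedTrunkMTL(in_dim=16, hidden=32, task_out_dims=[1, 1, 3])
outs = model(torch.randn(4, 16))
print([o.shape for o in outs])  # [(4, 1), (4, 1), (4, 3)]
```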

1,202 citations


Journal ArticleDOI
TL;DR: A crystal graph convolutional neural networks framework to directly learn material properties from the connection of atoms in the crystal, providing a universal and interpretable representation of crystalline materials.
Abstract: The use of machine learning methods for accelerating the design of crystalline materials usually requires manually constructed feature vectors or complex transformations of atom coordinates to input the crystal structure, which either constrains the model to certain crystal types or makes it difficult to provide chemical insights. Here, we develop a crystal graph convolutional neural networks framework to directly learn material properties from the connection of atoms in the crystal, providing a universal and interpretable representation of crystalline materials. Our method provides highly accurate predictions of density functional theory calculated values for eight different properties of crystals with various structure types and compositions after being trained with 10^4 data points. Further, our framework is interpretable because one can extract the contributions from local chemical environments to global properties. Using an example of perovskites, we show how this information can be utilized to discover empirical rules for materials design.
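
A toy sketch of the crystal-graph convolution idea: atom features are updated by gated aggregation over neighbor and bond features. The gating scheme loosely follows the paper's general description, but the tensor layout, dimensions, and initialization details are assumptions for the example.

```python
# Toy crystal-graph convolution over a fixed-size neighbor list.
import torch
import torch.nn as nn

class CrystalConv(nn.Module):
    def __init__(self, atom_dim, bond_dim):
        super().__init__()
        self.lin = nn.Linear(2 * atom_dim + bond_dim, 2 * atom_dim)

    def forward(self, atoms, bonds, nbrs):
        # atoms: (N, A) atom features; bonds: (N, M, B) bond features;
        # nbrs: (N, M) long tensor of neighbor indices for each atom.
        nbr_feats = atoms[nbrs]                               # (N, M, A)
        self_feats = atoms.unsqueeze(1).expand_as(nbr_feats)  # (N, M, A)
        z = self.lin(torch.cat([self_feats, nbr_feats, bonds], dim=-1))
        gate, core = z.chunk(2, dim=-1)
        msg = torch.sigmoid(gate) * torch.tanh(core)          # gated messages
        return atoms + msg.sum(dim=1)                         # residual atom update

conv = CrystalConv(atom_dim=8, bond_dim=4)
atoms = torch.randn(10, 8)
bonds = torch.randn(10, 6, 4)
nbrs = torch.randint(0, 10, (10, 6))
print(conv(atoms, bonds, nbrs).shape)  # torch.Size([10, 8])
```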

Journal ArticleDOI
TL;DR: It is shown that generated medical images can be used for synthetic data augmentation, and improve the performance of CNN for medical image classification, and generalize to other medical classification applications and thus support radiologists’ efforts to improve diagnosis.

Proceedings ArticleDOI
01 Nov 2018
TL;DR: In this article, a self-attention based sequential model (SASRec) is proposed, which uses an attention mechanism to identify which items are 'relevant' from a user's action history, and uses them to predict the next item.
Abstract: Sequential dynamics are a key feature of many modern recommender systems, which seek to capture the 'context' of users' activities on the basis of actions they have performed recently. To capture such patterns, two approaches have proliferated: Markov Chains (MCs) and Recurrent Neural Networks (RNNs). Markov Chains assume that a user's next action can be predicted on the basis of just their last (or last few) actions, while RNNs in principle allow for longer-term semantics to be uncovered. Generally speaking, MC-based methods perform best in extremely sparse datasets, where model parsimony is critical, while RNNs perform better in denser datasets where higher model complexity is affordable. Our goal is to balance these two strengths by proposing a self-attention based sequential model (SASRec) that allows us to capture long-term semantics (like an RNN) but, using an attention mechanism, makes its predictions based on relatively few actions (like an MC). At each time step, SASRec seeks to identify which items are 'relevant' from a user's action history and uses them to predict the next item. Extensive empirical studies show that our method outperforms various state-of-the-art sequential models (including MC/CNN/RNN-based approaches) on both sparse and dense datasets. Moreover, the model is an order of magnitude more efficient than comparable CNN/RNN-based models. Visualizations of attention weights also show how our model adaptively handles datasets with varying density and uncovers meaningful patterns in activity sequences.
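
A pared-down sketch of the self-attentive mechanism described above: causally masked self-attention over item embeddings, scored against the embedding table to rank the next item at each step. The single attention layer and all hyperparameters are simplifications, not the paper's full architecture.

```python
# Tiny self-attentive next-item recommender in the spirit of SASRec.
import torch
import torch.nn as nn

class TinySASRec(nn.Module):
    def __init__(self, n_items, max_len=50, d=64):
        super().__init__()
        self.item_emb = nn.Embedding(n_items + 1, d, padding_idx=0)
        self.pos_emb = nn.Embedding(max_len, d)
        self.attn = nn.MultiheadAttention(d, num_heads=1, batch_first=True)

    def forward(self, seq):                          # seq: (B, T) item ids
        T = seq.size(1)
        pos = torch.arange(T, device=seq.device)
        x = self.item_emb(seq) + self.pos_emb(pos)
        # Boolean causal mask: True entries are blocked (no peeking at the future).
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool, device=seq.device), 1)
        h, _ = self.attn(x, x, x, attn_mask=causal)  # attend only to past actions
        return h @ self.item_emb.weight.T            # next-item scores per step

model = TinySASRec(n_items=1000)
scores = model(torch.randint(1, 1001, (2, 10)))
print(scores.shape)  # torch.Size([2, 10, 1001])
```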

Journal ArticleDOI
TL;DR: Overall cancer mortality rates have declined since 2002 in Korea, while incidence has increased and survival has improved.
Abstract: Purpose The aim of this study was to report nationwide cancer statistics in Korea, including incidence, mortality, survival, and prevalence, and their trends.


Posted Content
Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra
TL;DR: The Deep Recurrent Attentive Writer neural network architecture for image generation substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
Abstract: This paper introduces the Deep Recurrent Attentive Writer (DRAW) neural network architecture for image generation. DRAW networks combine a novel spatial attention mechanism that mimics the foveation of the human eye, with a sequential variational auto-encoding framework that allows for the iterative construction of complex images. The system substantially improves on the state of the art for generative models on MNIST, and, when trained on the Street View House Numbers dataset, it generates images that cannot be distinguished from real data with the naked eye.
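
A skeletal sketch of the iterative-construction idea only: a recurrent decoder emits additive updates to a canvas over several steps. The spatial attention mechanism and the encoder side of the variational framework are omitted here, and all sizes are assumptions.

```python
# Iterative canvas construction in the spirit of DRAW (decoder side only).
import torch
import torch.nn as nn

class MiniDRAW(nn.Module):
    def __init__(self, img_dim=784, z_dim=10, hid=256, steps=8):
        super().__init__()
        self.img_dim, self.z_dim, self.hid, self.steps = img_dim, z_dim, hid, steps
        self.dec = nn.GRUCell(z_dim, hid)
        self.write = nn.Linear(hid, img_dim)   # additive canvas update

    def sample(self, n):
        h = torch.zeros(n, self.hid)
        canvas = torch.zeros(n, self.img_dim)
        for _ in range(self.steps):            # build the image step by step
            z = torch.randn(n, self.z_dim)     # latent sample per step
            h = self.dec(z, h)
            canvas = canvas + self.write(h)    # accumulate "strokes"
        return torch.sigmoid(canvas)

print(MiniDRAW().sample(4).shape)  # torch.Size([4, 784])
```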

Proceedings ArticleDOI
15 Jun 2019
TL;DR: This work proposes a differentiable neural architecture search (DNAS) framework that uses gradient-based methods to optimize ConvNet architectures, avoiding enumerating and training individual architectures separately as in previous methods.
Abstract: Designing accurate and efficient ConvNets for mobile devices is challenging because the design space is combinatorially large. Due to this, previous neural architecture search (NAS) methods are computationally expensive. ConvNet architecture optimality depends on factors such as input resolution and target devices. However, existing approaches are too resource demanding for case-by-case redesigns. Also, previous work focuses primarily on reducing FLOPs, but FLOP count does not always reflect actual latency. To address these, we propose a differentiable neural architecture search (DNAS) framework that uses gradient-based methods to optimize ConvNet architectures, avoiding enumerating and training individual architectures separately as in previous methods. FBNets (Facebook-Berkeley-Nets), a family of models discovered by DNAS, surpass state-of-the-art models both designed manually and generated automatically. FBNet-B achieves 74.1% top-1 accuracy on ImageNet with 295M FLOPs and 23.1 ms latency on a Samsung S8 phone, 2.4x smaller and 1.5x faster than MobileNetV2-1.3 with similar accuracy. Despite higher accuracy and lower latency than MnasNet, we estimate FBNet-B's search cost is 420x smaller than MnasNet's, at only 216 GPU-hours. Searched for different resolutions and channel sizes, FBNets achieve 1.5% to 6.4% higher accuracy than MobileNetV2. The smallest FBNet achieves 50.2% accuracy and 2.9 ms latency (345 frames per second) on a Samsung S8. Over a Samsung-optimized FBNet, the iPhone-X-optimized model achieves a 1.4x speedup on an iPhone X. FBNet models are open-sourced at https://github.com/facebookresearch/mobile-vision.
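
A minimal sketch of the differentiable-search idea: each layer's output is a Gumbel-softmax-weighted sum over candidate ops, so the architecture parameters receive gradients alongside the weights. The candidate set and temperature are illustrative, not FBNet's actual search space.

```python
# One searchable layer: relax the discrete op choice with Gumbel-softmax.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SearchableLayer(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(c, c, 3, padding=1),   # candidate: 3x3 conv
            nn.Conv2d(c, c, 5, padding=2),   # candidate: 5x5 conv
            nn.Identity(),                   # candidate: skip connection
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture params

    def forward(self, x, tau=1.0):
        w = F.gumbel_softmax(self.alpha, tau=tau)  # differentiable op sampling
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

layer = SearchableLayer(c=16)
out = layer(torch.randn(2, 16, 8, 8))
print(out.shape)  # torch.Size([2, 16, 8, 8]); argmax(alpha) picks the final op
```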

Journal ArticleDOI
TL;DR: In this paper, the cosmological results from a combined analysis of galaxy clustering and weak gravitational lensing, using 1321 deg^2 of griz imaging data from the first year of the Dark Energy Survey (DES Y1), were presented.
Abstract: We present cosmological results from a combined analysis of galaxy clustering and weak gravitational lensing, using 1321 deg^2 of griz imaging data from the first year of the Dark Energy Survey (DES Y1). We combine three two-point functions: (i) the cosmic shear correlation function of 26 million source galaxies in four redshift bins, (ii) the galaxy angular autocorrelation function of 650,000 luminous red galaxies in five redshift bins, and (iii) the galaxy-shear cross-correlation of luminous red galaxy positions and source galaxy shears. To demonstrate the robustness of these results, we use independent pairs of galaxy shape, photometric-redshift estimation and validation, and likelihood analysis pipelines. To prevent confirmation bias, the bulk of the analysis was carried out while "blind" to the true results; we describe an extensive suite of systematics checks performed and passed during this blinded phase. The data are modeled in flat ΛCDM and wCDM cosmologies, marginalizing over 20 nuisance parameters, varying 6 (for ΛCDM) or 7 (for wCDM) cosmological parameters including the neutrino mass density and including the 457×457 element analytic covariance matrix. We find consistent cosmological results from these three two-point functions and from their combination obtain S8 ≡ σ8(Ωm/0.3)^0.5 = 0.773 (+0.026/−0.020) and Ωm = 0.267 (+0.030/−0.017) for ΛCDM; for wCDM, we find S8 = 0.782 (+0.036/−0.024), Ωm = 0.284 (+0.033/−0.030), and w = −0.82 (+0.21/−0.20) at 68% C.L. The precision of these DES Y1 constraints rivals that from the Planck cosmic microwave background measurements, allowing a comparison of structure in the very early and late Universe on equal terms. Although the DES Y1 best-fit values for S8 and Ωm are lower than the central values from Planck for both ΛCDM and wCDM, the Bayes factor indicates that the DES Y1 and Planck data sets are consistent with each other in the context of ΛCDM. Combining DES Y1 with Planck, baryonic acoustic oscillation measurements from SDSS, 6dF, and BOSS and type Ia supernovae from the Joint Lightcurve Analysis data set, we derive very tight constraints on cosmological parameters: S8 = 0.802 ± 0.012 and Ωm = 0.298 ± 0.007 in ΛCDM and w = −1.00 (+0.05/−0.04) in wCDM. Upcoming Dark Energy Survey analyses will provide more stringent tests of the ΛCDM model and extensions such as a time-varying equation of state of dark energy or modified gravity.
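
As a quick sanity check of the quoted definition S8 ≡ σ8(Ωm/0.3)^0.5, the snippet below inverts it at the DES Y1 ΛCDM central values from the abstract to recover the implied σ8.

```python
# Invert S8 = sigma8 * (Omega_m / 0.3)**0.5 at the quoted LCDM central values.
S8, Om = 0.773, 0.267
sigma8 = S8 / (Om / 0.3) ** 0.5
print(round(sigma8, 3))  # ~0.819, the implied sigma8
```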

Journal ArticleDOI
TL;DR: In this article, the authors present a comprehensive overview of academic research on the relationship between environmental, social, and governance (ESG) criteria and corporate financial performance (CFP) and show that the business case for ESG investing is empirically very well founded.
Abstract: The search for a relation between environmental, social, and governance (ESG) criteria and corporate financial performance (CFP) can be traced back to the beginning of the 1970s. Scholars and investors have published more than 2000 empirical studies and several review studies on this relation since then. The largest previous review study analyzes just a fraction of existing primary studies, making findings difficult to generalize. Thus, knowledge on the financial effects of ESG criteria remains fragmented. To overcome this shortcoming, this study extracts all provided primary and secondary data of previous academic review studies. Through doing this, the study combines the findings of about 2200 individual studies. Hence, this study is by far the most exhaustive overview of academic research on this topic and allows for generalizable statements. The results show that the business case for ESG investing is empirically very well founded. Roughly 90% of studies find a nonnegative ESG–CFP relation. More impor...

Journal ArticleDOI
TL;DR: SDSS-IV as mentioned in this paper is a project encompassing three major spectroscopic programs: the Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2), the Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey, and the extended Baryon Oscillation Spectroscopic Survey (eBOSS), which includes the Time Domain Spectroscopic Survey (TDSS) subprogram.
Abstract: We describe the Sloan Digital Sky Survey IV (SDSS-IV), a project encompassing three major spectroscopic programs. The Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2) is observing hundreds of thousands of Milky Way stars at high resolution and high signal-to-noise ratios in the near-infrared. The Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey is obtaining spatially resolved spectroscopy for thousands of nearby galaxies (median z ~ 0.03). The extended Baryon Oscillation Spectroscopic Survey (eBOSS) is mapping the galaxy, quasar, and neutral gas distributions between z ~ 0.6 and 3.5 to constrain cosmology using baryon acoustic oscillations, redshift space distortions, and the shape of the power spectrum. Within eBOSS, we are conducting two major subprograms: the SPectroscopic IDentification of eROSITA Sources (SPIDERS), investigating X-ray AGNs and galaxies in X-ray clusters, and the Time Domain Spectroscopic Survey (TDSS), obtaining spectra of variable sources. All programs use the 2.5 m Sloan Foundation Telescope at the Apache Point Observatory; observations there began in Summer 2014. APOGEE-2 also operates a second near-infrared spectrograph at the 2.5 m du Pont Telescope at Las Campanas Observatory, with observations beginning in early 2017. Observations at both facilities are scheduled to continue through 2020. In keeping with previous SDSS policy, SDSS-IV provides regularly scheduled public data releases; the first one, Data Release 13, was made available in 2016 July.

Journal ArticleDOI
TL;DR: It is demonstrated that selective clearance of SCs by a pharmacological agent is beneficial in part through its rejuvenation of aged tissue stem cells, suggesting that senolytic drugs may represent a new class of radiation mitigators and anti-aging agents.
Abstract: Senescent cells (SCs) accumulate with age and after genotoxic stress, such as total-body irradiation (TBI). Clearance of SCs in a progeroid mouse model using a transgenic approach delays several age-associated disorders, suggesting that SCs play a causative role in certain age-related pathologies. Thus, a 'senolytic' pharmacological agent that can selectively kill SCs holds promise for rejuvenating tissue stem cells and extending health span. To test this idea, we screened a collection of compounds and identified ABT263 (a specific inhibitor of the anti-apoptotic proteins BCL-2 and BCL-xL) as a potent senolytic drug. We show that ABT263 selectively kills SCs in culture in a cell type- and species-independent manner by inducing apoptosis. Oral administration of ABT263 to either sublethally irradiated or normally aged mice effectively depleted SCs, including senescent bone marrow hematopoietic stem cells (HSCs) and senescent muscle stem cells (MuSCs). Notably, this depletion mitigated TBI-induced premature aging of the hematopoietic system and rejuvenated the aged HSCs and MuSCs in normally aged mice. Our results demonstrate that selective clearance of SCs by a pharmacological agent is beneficial in part through its rejuvenation of aged tissue stem cells. Thus, senolytic drugs may represent a new class of radiation mitigators and anti-aging agents.


Journal ArticleDOI
TL;DR: It is suggested that the plant can modulate its microbiota to dynamically adjust to its environment; to better understand the level of plant dependence on the microbiotic components, the core microbiota need to be determined at different hierarchical scales of ecology, while pan-microbiome analyses would improve characterization of the functions displayed.
Abstract: Plants can no longer be considered as standalone entities and a more holistic perception is needed. Indeed, plants harbor a wide diversity of microorganisms both inside and outside their tissues, in the endosphere and ectosphere, respectively. These microorganisms, which mostly belong to Bacteria and Fungi, are involved in major functions such as plant nutrition and plant resistance to biotic and abiotic stresses. Hence, the microbiota impact plant growth and survival, two key components of fitness. Plant fitness is therefore a consequence of the plant per se and its microbiota, which collectively form a holobiont. Complementary to the reductionist perception of evolutionary pressures acting on plant or symbiotic compartments, the plant holobiont concept requires a novel perception of evolution. The interlinkages between the plant holobiont components are explored here in the light of current ecological and evolutionary theories. Microbiome complexity and the rules of microbiotic community assemblage are not yet fully understood. It is suggested that the plant can modulate its microbiota to dynamically adjust to its environment. To better understand the level of plant dependence on the microbiotic components, the core microbiota need to be determined at different hierarchical scales of ecology while pan-microbiome analyses would improve characterization of the functions displayed.

Journal ArticleDOI
TL;DR: In a population-based study in Iceland, children under 10 years of age had a lower incidence of SARS-CoV-2 infection than adolescents or adults, and females a lower incidence than males; the proportion of infected participants identified through population screening remained stable for the 20-day duration of screening.
Abstract: Background During the current worldwide pandemic, coronavirus disease 2019 (Covid-19) was first diagnosed in Iceland at the end of February. However, data are limited on how SARS-CoV-2, th...

Proceedings ArticleDOI
19 Apr 2019
TL;DR: CenterNet as discussed by the authors detects each object as a triplet, rather than a pair, of keypoints, which improves both precision and recall by enriching information collected by both the top-left and bottom-right corners and providing more recognizable information from the central regions.
Abstract: In object detection, keypoint-based approaches often experience the drawback of a large number of incorrect object bounding boxes, arguably due to the lack of an additional assessment inside cropped regions. This paper presents an efficient solution that explores the visual patterns within individual cropped regions with minimal costs. We build our framework upon a representative one-stage keypoint-based detector named CornerNet. Our approach, named CenterNet, detects each object as a triplet, rather than a pair, of keypoints, which improves both precision and recall. Accordingly, we design two customized modules, cascade corner pooling and center pooling, which enrich information collected by both the top-left and bottom-right corners and provide more recognizable information from the central regions. On the MS-COCO dataset, CenterNet achieves an AP of 47.0%, outperforming all existing one-stage detectors by at least 4.9%. Furthermore, with a faster inference speed than the top-ranked two-stage detectors, CenterNet demonstrates a comparable performance to these detectors. Code is available at https://github.com/Duankaiwen/CenterNet.
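
The core triplet check can be stated compactly: a corner-pair box survives only if a detected center keypoint of the same class falls inside its central region. The sketch below assumes a central region one third of the box size; that fraction, and the function itself, are illustrative rather than the method's exact formulation.

```python
# Center-keypoint check for a corner-pair box, in the spirit of CenterNet.
def passes_center_check(box, centers, frac=1/3):
    """box: (x1, y1, x2, y2); centers: iterable of (cx, cy) keypoints
    of the same class. Returns True if any center lies in the box's
    central region (a `frac`-scaled box sharing the same center)."""
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    cx1, cx2 = x1 + w * (1 - frac) / 2, x2 - w * (1 - frac) / 2
    cy1, cy2 = y1 + h * (1 - frac) / 2, y2 - h * (1 - frac) / 2
    return any(cx1 <= cx <= cx2 and cy1 <= cy <= cy2 for cx, cy in centers)

print(passes_center_check((0, 0, 90, 90), [(45, 44)]))  # True: near the center
print(passes_center_check((0, 0, 90, 90), [(5, 5)]))    # False: near a corner
```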

Journal ArticleDOI
TL;DR: Actinobacteria are Gram-positive bacteria with high G+C DNA content that constitute one of the largest bacterial phyla, and they are ubiquitously distributed in both aquatic and terrestrial ecosystems.
Abstract: Actinobacteria are Gram-positive bacteria with high G+C DNA content that constitute one of the largest bacterial phyla, and they are ubiquitously distributed in both aquatic and terrestrial ecosystems. Many Actinobacteria have a mycelial lifestyle and undergo complex morphological differentiation. They also have an extensive secondary metabolism and produce about two-thirds of all naturally derived antibiotics in current clinical use, as well as many anticancer, anthelmintic, and antifungal compounds. Consequently, these bacteria are of major importance for biotechnology, medicine, and agriculture. Actinobacteria play diverse roles in their associations with various higher organisms, since their members have adopted different lifestyles, and the phylum includes pathogens (notably, species of Corynebacterium, Mycobacterium, Nocardia, Propionibacterium, and Tropheryma), soil inhabitants (e.g., Micromonospora and Streptomyces species), plant commensals (e.g., Frankia spp.), and gastrointestinal commensals (Bifidobacterium spp.). Actinobacteria also play an important role as symbionts and as pathogens in plant-associated microbial communities. This review presents an update on the biology of this important bacterial phylum.

Journal ArticleDOI
TL;DR: This work deploys LSTM networks for predicting out-of-sample directional movements for the constituent stocks of the S&P 500 from 1992 until 2015 and finds one common pattern among the stocks selected for trading – they exhibit high volatility and a short-term reversal return profile.

Journal ArticleDOI
TL;DR: Among patients with relapsing multiple sclerosis, ocrelizumab was associated with lower rates of disease activity and progression than interferon beta‐1a over a period of 96 weeks.
Abstract: Background B cells influence the pathogenesis of multiple sclerosis. Ocrelizumab is a humanized monoclonal antibody that selectively depletes CD20+ B cells. Methods In two identical phase 3 trials, we randomly assigned 821 and 835 patients with relapsing multiple sclerosis to receive intravenous ocrelizumab at a dose of 600 mg every 24 weeks or subcutaneous interferon beta-1a at a dose of 44 μg three times weekly for 96 weeks. The primary end point was the annualized relapse rate. Results The annualized relapse rate was lower with ocrelizumab than with interferon beta-1a in trial 1 (0.16 vs. 0.29; 46% lower rate with ocrelizumab; P<0.001) and in trial 2 (0.16 vs. 0.29; 47% lower rate; P<0.001). In prespecified pooled analyses, the percentage of patients with disability progression confirmed at 12 weeks was significantly lower with ocrelizumab than with interferon beta-1a (9.1% vs. 13.6%; hazard ratio, 0.60; 95% confidence interval [CI], 0.45 to 0.81; P<0.001), as was the percentage of patients with disabilit...

Journal ArticleDOI
07 Jan 2019-BMJ
TL;DR: The government's £12bn investment in mental health services is to be rolled out in England over the next five years, with a target of £12.5bn by 2020.
Abstract: Rightly ambitious, but can the NHS deliver?