
Journal ArticleDOI
TL;DR: Clinical observations and biology underlying tumours with combined SCLC and NSCLC histology and cancers that transform from adenocarcinoma to SCLC are summarized.
Abstract: Lung cancer is the most common cause of cancer deaths worldwide. The two broad histological subtypes of lung cancer are small-cell lung cancer (SCLC), which is the cause of 15% of cases, and non-small-cell lung cancer (NSCLC), which accounts for 85% of cases and includes adenocarcinoma, squamous-cell carcinoma, and large-cell carcinoma. Although NSCLC and SCLC are commonly thought to be different diseases owing to their distinct biology and genomic abnormalities, the idea that these malignant disorders might share common cells of origin has been gaining support. This idea has been supported by the unexpected findings that a subset of NSCLCs with mutated EGFR return as SCLC when resistance to EGFR tyrosine kinase inhibitors develops. Additionally, other case reports have described the coexistence of NSCLC and SCLC, further challenging the commonly accepted view of their distinct lineages. Here, we summarise the published clinical observations and biology underlying tumours with combined SCLC and NSCLC histology and cancers that transform from adenocarcinoma to SCLC. We also discuss pre-clinical studies pointing to common potential cells of origin, and speculate how the distinct paths of differentiation are determined by the genomics of each disease.

619 citations


Journal ArticleDOI
24 Mar 2015-eLife
TL;DR: It is shown that HLOs are remarkably similar to human fetal lung based on global transcriptional profiles, suggesting that HLOs are an excellent model to study human lung development, maturation and disease.
Abstract: Cell behavior has traditionally been studied in the lab in two-dimensional situations, where cells are grown in thin layers on cell-culture dishes. However, most cells in the body exist in a three-dimensional environment as part of complex tissues and organs, and so researchers have been attempting to re-create these environments in the lab. To date, several such ‘organoids’ have been successfully generated, including models of the human intestine, stomach, brain and liver. These organoids can mimic the responses of real tissues and can be used to investigate how organs form, change with disease, and how they might respond to potential therapies. Here, Dye et al. developed a new three-dimensional model of the human lung by coaxing human stem cells to become specific types of cells that then formed complex tissues in a petri dish. To make these lung organoids, Dye et al. manipulated several of the signaling pathways that control the formation of organs during the development of animal embryos. First, the stem cells were instructed to form a type of tissue called endoderm, which is found in early embryos and gives rise to the lung, liver and several other internal organs. Then, Dye et al. activated two important developmental pathways that are known to make endoderm form three-dimensional intestinal tissue. However, by inhibiting two other key developmental pathways at the same time, the endoderm became tissue that resembles the early lung found in embryos instead. This early lung-like tissue formed three-dimensional spherical structures as it developed. The next challenge was to make these structures develop into lung tissue. Dye et al. worked out a method to do this, which involved exposing the cells to additional proteins that are involved in lung development. The resulting lung organoids survived in laboratory cultures for over 100 days and developed into well-organized structures that contain many of the types of cells found in the lung. Further analysis revealed that the gene activity in the lung organoids resembles that of the lung of a developing human fetus, suggesting that lung organoids grown in the dish are not fully mature. Dye et al.'s findings provide a new approach for creating human lung organoids in culture that may open up new avenues for investigating lung development and diseases.

619 citations



Posted Content
TL;DR: A framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently, in Deep Q-Networks, a reinforcement learning algorithm that achieved human-level performance across many Atari games.
Abstract: Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.
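
As an illustration of the core idea, here is a minimal proportional-prioritization sketch in Python: transitions are stored with priorities derived from their TD errors, sampled with probability proportional to priority, and reweighted with importance-sampling weights. The class name, hyperparameter defaults and buffer layout are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional-prioritization sketch: P(i) is proportional to (|TD error| + eps)^alpha."""

    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity, self.alpha, self.eps = capacity, alpha, eps
        self.data, self.priorities, self.pos = [], [], 0

    def add(self, transition, td_error):
        priority = (abs(td_error) + self.eps) ** self.alpha
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(priority)
        else:  # overwrite the oldest entry once the buffer is full
            self.data[self.pos] = transition
            self.priorities[self.pos] = priority
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        p = np.asarray(self.priorities)
        probs = p / p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        # importance-sampling weights correct the bias from non-uniform sampling
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return [self.data[i] for i in idx], idx, weights
```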

619 citations


Journal ArticleDOI
TL;DR: The diseases in which oxidative stress is one of the triggers and the plant-derived antioxidant compounds with their mechanisms of antioxidant defenses that can help in the prevention of these diseases are discussed.
Abstract: Oxidative stress plays an essential role in the pathogenesis of chronic diseases such as cardiovascular diseases, diabetes, neurodegenerative diseases, and cancer. Long term exposure to increased levels of pro-oxidant factors can cause structural defects at a mitochondrial DNA level, as well as functional alteration of several enzymes and cellular structures leading to aberrations in gene expression. The modern lifestyle associated with processed food, exposure to a wide range of chemicals and lack of exercise plays an important role in oxidative stress induction. However, the use of medicinal plants with antioxidant properties has been exploited for their ability to treat or prevent several human pathologies in which oxidative stress seems to be one of the causes. In this review we discuss the diseases in which oxidative stress is one of the triggers and the plant-derived antioxidant compounds with their mechanisms of antioxidant defenses that can help in the prevention of these diseases. Finally, both the beneficial and detrimental effects of antioxidant molecules that are used to reduce oxidative stress in several human conditions are discussed.

619 citations


Posted Content
TL;DR: In this paper, the authors show that even a person with an intense blush will be pegged as dishonest, provided that even the smallest fraction of dishonest persons also shows an intense blush.
Abstract: In my model of the evolution of honesty,1 I assumed the existence of a signal-a blush, perhaps-extreme values of which served to identify some individuals as being honest with certainty. Joseph Harrington notes that without this assumption, honest individuals have difficulty invading a population initially dominated by defectors. For readers who do not wish to work through the algebra in his comment, the argument is easily summarized in nontechnical terms. Suppose two honest mutants, A and B, arrive in an uncountably large population consisting entirely of dishonest persons. And suppose that the probability that an honest person exhibits an intense blush is, say, 0.999, while the corresponding probability for everyone else is only 0.001. When A sees an intense blush on the face of B, what will then be his estimate of the probability that B is honest? Assuming that A knows the laws of elementary probability and corrects for the base rate of honest persons in the population, it will be zero. When virtually everyone in the population is dishonest, even a person with an intense blush will be pegged as dishonest, provided that even the smallest fraction of dishonest persons also shows an intense blush. With
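
The base-rate argument can be made concrete with a one-line Bayes' rule computation; the prior of 1e-9 below is a hypothetical stand-in for "two honest mutants in an uncountably large population", while the two conditional probabilities are those quoted in the abstract.

```python
def posterior_honest(prior_honest, p_blush_given_honest=0.999, p_blush_given_dishonest=0.001):
    """P(honest | intense blush) by Bayes' rule, correcting for the base rate."""
    num = p_blush_given_honest * prior_honest
    den = num + p_blush_given_dishonest * (1.0 - prior_honest)
    return num / den

# Two honest mutants in an effectively infinite dishonest population:
# the prior is vanishingly small, so the posterior is ~0 despite the strong signal.
print(posterior_honest(prior_honest=1e-9))  # about 1e-6, i.e. effectively zero
```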

619 citations


Journal ArticleDOI
TL;DR: In this paper, a catalytic lignocellulose biorefinery process is presented, valorizing both polysaccharide and lignin components into a handful of chemicals.
Abstract: A catalytic lignocellulose biorefinery process is presented, valorizing both polysaccharide and lignin components into a handful of chemicals. To that end, birch sawdust is efficiently delignified through simultaneous solvolysis and catalytic hydrogenolysis in the presence of a Ru on carbon catalyst (Ru/C) in methanol under a H2 atmosphere at elevated temperature, resulting in a carbohydrate pulp and a lignin oil. The lignin oil yields above 50% of phenolic monomers (mainly 4-n-propylguaiacol and 4-n-propylsyringol) and about 20% of a set of phenolic dimers, relative to the original lignin content, next to phenolic oligomers. The structural features of the lignin monomers, dimers and oligomers were identified by a combination of GC/MS, GPC and 2D HSQC NMR techniques, showing interesting functionalities for forthcoming polymer applications. The effect of several key parameters like temperature, reaction time, wood particle size, reactor loading, catalyst reusability and the influence of solvent and gas were examined in view of the phenolic product yield, the degree of delignification and the sugar retention as a first assessment of the techno-economic feasibility of this biorefinery process. The separated carbohydrate pulp contains up to 92% of the initial polysaccharides, with a nearly quantitative retention of cellulose. Pulp valorization was demonstrated by its chemocatalytic conversion to sugar polyols, showing the multiple use of Ru/C, initially applied in the hydrogenolysis process. Various lignocellulosic substrates, including genetically modified lines of Arabidopsis thaliana, were finally processed in the hydrogenolytic biorefinery, indicating lignocellulose rich in syringyl-type lignin, as found in hardwoods, as the ideal feedstock for the production of chemicals.

619 citations


Book ChapterDOI
08 Sep 2018
TL;DR: This paper argues that, given the limited model capacity and the unlimited new information to be learned, knowledge has to be preserved or erased selectively and proposes a novel approach for lifelong learning, coined Memory Aware Synapses (MAS), which computes the importance of the parameters of a neural network in an unsupervised and online manner.
Abstract: Humans can learn in a continuous manner. Old rarely utilized knowledge can be overwritten by new incoming information while important, frequently used knowledge is prevented from being erased. In artificial learning systems, lifelong learning so far has focused mainly on accumulating knowledge over tasks and overcoming catastrophic forgetting. In this paper, we argue that, given the limited model capacity and the unlimited new information to be learned, knowledge has to be preserved or erased selectively. Inspired by neuroplasticity, we propose a novel approach for lifelong learning, coined Memory Aware Synapses (MAS). It computes the importance of the parameters of a neural network in an unsupervised and online manner. Given a new sample which is fed to the network, MAS accumulates an importance measure for each parameter of the network, based on how sensitive the predicted output function is to a change in this parameter. When learning a new task, changes to important parameters can then be penalized, effectively preventing important knowledge related to previous tasks from being overwritten. Further, we show an interesting connection between a local version of our method and Hebb’s rule, which is a model for the learning process in the brain. We test our method on a sequence of object recognition tasks and on the challenging problem of learning an embedding for predicting triplets. We show state-of-the-art performance and, for the first time, the ability to adapt the importance of the parameters based on unlabeled data towards what the network needs (not) to forget, which may vary depending on test conditions.
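
A minimal PyTorch-style sketch of the importance estimate and the resulting penalty, assuming a data loader that yields unlabeled inputs; the function names and the squared-error form of the penalty follow the description above, but this is an illustrative reconstruction rather than the authors' code.

```python
import torch

def mas_importance(model, data_loader):
    """Accumulate MAS importance: sensitivity of the squared L2 norm of the
    output to each parameter, averaged over (unlabeled) samples."""
    importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for x in data_loader:                      # labels are not needed
        model.zero_grad()
        out = model(x)
        out.pow(2).sum().backward()            # d ||f(x)||^2 / d theta
        for n, p in model.named_parameters():
            if p.grad is not None:
                importance[n] += p.grad.abs()
        n_batches += 1
    return {n: w / max(n_batches, 1) for n, w in importance.items()}

def mas_penalty(model, importance, old_params, lam=1.0):
    """Regularizer that penalizes changes to parameters deemed important."""
    return lam * sum((importance[n] * (p - old_params[n]).pow(2)).sum()
                     for n, p in model.named_parameters())
```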

619 citations


Journal ArticleDOI
TL;DR: Settings particularly relevant to Mendelian randomization are prioritized in the paper, notably the scenario of a continuous exposure and a continuous or binary outcome.
Abstract: Instrumental variable analysis is an approach for obtaining causal inferences on the effect of an exposure (risk factor) on an outcome from observational data. It has gained in popularity over the past decade with the use of genetic variants as instrumental variables, known as Mendelian randomization. An instrumental variable is associated with the exposure, but not associated with any confounder of the exposure-outcome association, nor is there any causal pathway from the instrumental variable to the outcome other than via the exposure. Under the assumption that a single instrumental variable or a set of instrumental variables for the exposure is available, the causal effect of the exposure on the outcome can be estimated. There are several methods available for instrumental variable estimation; we consider the ratio method, two-stage methods, likelihood-based methods, and semi-parametric methods. Techniques for obtaining statistical inferences and confidence intervals are presented. The statistical properties of estimates from these methods are compared, and practical advice is given about choosing a suitable analysis method. In particular, bias and coverage properties of estimators are considered, especially with weak instruments. Settings particularly relevant to Mendelian randomization are prioritized in the paper, notably the scenario of a continuous exposure and a continuous or binary outcome.
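
For the single-instrument case, the ratio method and a basic two-stage estimator can be written in a few lines of Python with NumPy; this is a didactic sketch (no standard errors or weak-instrument corrections), with hypothetical argument names g (instrument), x (exposure) and y (outcome).

```python
import numpy as np

def iv_ratio_estimate(g, x, y):
    """Ratio (Wald) estimator with a single instrument:
    beta_IV = cov(G, Y) / cov(G, X), i.e. the instrument-outcome association
    divided by the instrument-exposure association."""
    return np.cov(g, y)[0, 1] / np.cov(g, x)[0, 1]

def two_stage_estimate(G, x, y):
    """Two-stage least squares with an instrument matrix G (n x k):
    stage 1 regresses the exposure on the instruments, stage 2 regresses
    the outcome on the fitted exposure."""
    Z = np.column_stack([np.ones(len(x)), G])
    xhat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # stage 1 fitted values
    X2 = np.column_stack([np.ones(len(y)), xhat])
    return np.linalg.lstsq(X2, y, rcond=None)[0][1]   # stage 2 slope = causal effect
```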

619 citations


Journal ArticleDOI
TL;DR: Folke et al. present social-ecological resilience as an approach for biosphere-based sustainability science.
Abstract: CITATION: Folke, C., et al. 2016. Social-ecological resilience and biosphere-based sustainability science. Ecology and Society, 21(3):41, doi:10.5751/ES-08748-210341.

619 citations


Journal ArticleDOI
TL;DR: Although currently overshadowed by MS in terms of numbers of compounds resolved, NMR spectroscopy offers advantages both on its own and coupled with MS, and is adept at tracing metabolic pathways and fluxes using isotope labels.

Journal ArticleDOI
TL;DR: In this paper, the authors propose sparse ternary compression (STC), a new compression framework that is specifically designed to meet the requirements of the federated learning environment, which extends the existing compression technique of top-$k$ gradient sparsification with a novel mechanism to enable downstream compression as well as ternarization and optimal Golomb encoding of the weight updates.
Abstract: Federated learning allows multiple parties to jointly train a deep learning model on their combined data, without any of the participants having to reveal their local data to a centralized server. This form of privacy-preserving collaborative learning, however, comes at the cost of a significant communication overhead during training. To address this problem, several compression methods have been proposed in the distributed training literature that can reduce the amount of required communication by up to three orders of magnitude. These existing methods, however, are only of limited utility in the federated learning setting, as they either only compress the upstream communication from the clients to the server (leaving the downstream communication uncompressed) or only perform well under idealized conditions, such as i.i.d. distribution of the client data, which typically cannot be found in federated learning. In this article, we propose sparse ternary compression (STC), a new compression framework that is specifically designed to meet the requirements of the federated learning environment. STC extends the existing compression technique of top-$k$ gradient sparsification with a novel mechanism to enable downstream compression as well as ternarization and optimal Golomb encoding of the weight updates. Our experiments on four different learning tasks demonstrate that STC distinctively outperforms federated averaging in common federated learning scenarios. These results advocate for a paradigm shift in federated optimization toward high-frequency low-bitwidth communication, in particular in the bandwidth-constrained learning environments.
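
The upstream half of the scheme, top-$k$ sparsification followed by ternarization of a weight update, can be sketched as below; the Golomb encoding of the retained positions and the downstream (server-to-client) path are omitted, and the sparsity default is an arbitrary illustration.

```python
import numpy as np

def sparse_ternary_compress(delta, sparsity=0.01):
    """Keep only the top-k largest-magnitude entries of a weight update,
    then ternarize them to {-mu, 0, +mu}, where mu is the mean magnitude
    of the retained entries (positions would subsequently be Golomb-encoded)."""
    flat = delta.ravel()
    k = max(1, int(sparsity * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the top-k entries
    mu = np.abs(flat[idx]).mean()
    out = np.zeros_like(flat)
    out[idx] = mu * np.sign(flat[idx])
    return out.reshape(delta.shape)
```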

Journal ArticleDOI
TL;DR: iDEP seamlessly connects 63 R/Bioconductor packages, 2 web services, and comprehensive annotation and pathway databases for 220 plant and animal species, and helps unveil the multifaceted functions of p53 and the possible involvement of several microRNAs such as miR-92a.
Abstract: RNA-seq is widely used for transcriptomic profiling, but the bioinformatics analysis of resultant data can be time-consuming and challenging, especially for biologists. We aim to streamline the bioinformatic analyses of gene-level data by developing a user-friendly, interactive web application for exploratory data analysis, differential expression, and pathway analysis. iDEP (integrated Differential Expression and Pathway analysis) seamlessly connects 63 R/Bioconductor packages, 2 web services, and comprehensive annotation and pathway databases for 220 plant and animal species. The workflow can be reproduced by downloading customized R code and related pathway files. As an example, we analyzed an RNA-Seq dataset of lung fibroblasts with Hoxa1 knockdown and revealed the possible roles of SP1 and E2F1 and their target genes, including microRNAs, in blocking G1/S transition. In another example, our analysis shows that in mouse B cells without functional p53, ionizing radiation activates the MYC pathway and its downstream genes involved in cell proliferation, ribosome biogenesis, and non-coding RNA metabolism. In wildtype B cells, radiation induces p53-mediated apoptosis and DNA repair while suppressing the target genes of MYC and E2F1, and leads to growth and cell cycle arrest. iDEP helps unveil the multifaceted functions of p53 and the possible involvement of several microRNAs such as miR-92a, miR-504, and miR-30a. In both examples, we validated known molecular pathways and generated novel, testable hypotheses. Combining comprehensive analytic functionalities with massive annotation databases, iDEP ( http://ge-lab.org/idep/ ) enables biologists to easily translate transcriptomic and proteomic data into actionable insights.

Journal ArticleDOI
TL;DR: Rociletinib was active in patients with EGFR-mutated NSCLC associated with the T790M resistance mutation and the only common dose-limiting adverse event was hyperglycemia.
Abstract: Background: Non–small-cell lung cancer (NSCLC) with a mutation in the gene encoding epidermal growth factor receptor (EGFR) is sensitive to approved EGFR inhibitors, but resistance develops, mediated by the T790M EGFR mutation in most cases. Rociletinib (CO-1686) is an EGFR inhibitor active in preclinical models of EGFR-mutated NSCLC with or without T790M. Methods: In this phase 1–2 study, we administered rociletinib to patients with EGFR-mutated NSCLC who had disease progression during previous treatment with an existing EGFR inhibitor. In the expansion (phase 2) part of the study, patients with T790M-positive disease received rociletinib at a dose of 500 mg twice daily, 625 mg twice daily, or 750 mg twice daily. Key objectives were assessment of safety, side-effect profile, pharmacokinetics, and preliminary antitumor activity of rociletinib. Tumor biopsies to identify T790M were performed during screening. Treatment was administered in continuous 21-day cycles. Results: A total of 130 patients were enrolled. ...

Journal ArticleDOI
TL;DR: Curcumin, a yellow pigment in the Indian spice Turmeric (Curcuma longa), which is chemically known as diferuloylmethane, was first isolated exactly two centuries ago in 1815 by two German Scientists, Vogel and Pelletier.
Abstract: Curcumin, a yellow pigment in the Indian spice Turmeric (Curcuma longa), which is chemically known as diferuloylmethane, was first isolated exactly two centuries ago in 1815 by two German Scientists, Vogel and Pelletier. However, according to the pubmed database, the first study on its biological activity as an antibacterial agent was published in 1949 in Nature and the first clinical trial was reported in The Lancet in 1937. Although the current database indicates almost 9000 publications on curcumin, until 1990 there were less than 100 papers published on this nutraceutical. At the molecular level, this multitargeted agent has been shown to exhibit anti-inflammatory activity through the suppression of numerous cell signalling pathways including NF-κB, STAT3, Nrf2, ROS and COX-2. Numerous studies have indicated that curcumin is a highly potent antimicrobial agent and has been shown to be active against various chronic diseases including various types of cancers, diabetes, obesity, cardiovascular, pulmonary, neurological and autoimmune diseases. Furthermore, this compound has also been shown to be synergistic with other nutraceuticals such as resveratrol, piperine, catechins, quercetin and genistein. To date, over 100 different clinical trials have been completed with curcumin, which clearly show its safety, tolerability and its effectiveness against various chronic diseases in humans. However, more clinical trials in different populations are necessary to prove its potential against different chronic diseases in humans. This review's primary focus is on lessons learnt about curcumin from clinical trials. Linked Articles This article is part of a themed section on Principles of Pharmacological Research of Nutraceuticals. To view the other articles in this section visit http://onlinelibrary.wiley.com/doi/10.1111/bph.v174.11/issuetoc

Journal ArticleDOI
Andres Castellanos-Gomez1
TL;DR: The recent isolation of atomically thin black phosphorus by mechanical exfoliation of bulk layered crystals has triggered an unprecedented interest, even higher than that raised by the first works on graphene and other two-dimensional materials, in the nanoscience and nanotechnology community.
Abstract: The recent isolation of atomically thin black phosphorus by mechanical exfoliation of bulk layered crystals has triggered an unprecedented interest, even higher than that raised by the first works on graphene and other two-dimensional materials, in the nanoscience and nanotechnology community. In this Perspective, we critically analyze the reasons behind the surge of experimental and theoretical works on this novel two-dimensional material. We believe that the fact that black phosphorus band gap value spans over a wide range of the electromagnetic spectrum (interesting for thermal imaging, thermoelectrics, fiber optics communication, photovoltaics, etc.) that was not covered by any other two-dimensional material isolated to date, its high carrier mobility, its ambipolar field-effect, and its rather unusual in-plane anisotropy drew the attention of the scientific community toward this two-dimensional material. Here, we also review the current advances, the future directions and the challenges in this young research...

Posted Content
TL;DR: This paper proposed normalized advantage functions (NAF) as an alternative to the more commonly used policy gradient and actor-critic methods to accelerate model-free reinforcement learning for continuous control tasks.
Abstract: Model-free reinforcement learning has been successfully applied to a range of challenging problems, and has recently been extended to handle large neural network policies and value functions. However, the sample complexity of model-free algorithms, particularly when using high-dimensional function approximators, tends to limit their applicability to physical systems. In this paper, we explore algorithms and representations to reduce the sample complexity of deep reinforcement learning for continuous control tasks. We propose two complementary techniques for improving the efficiency of such algorithms. First, we derive a continuous variant of the Q-learning algorithm, which we call normalized advantage functions (NAF), as an alternative to the more commonly used policy gradient and actor-critic methods. The NAF representation allows us to apply Q-learning with experience replay to continuous tasks, and substantially improves performance on a set of simulated robotic control tasks. To further improve the efficiency of our approach, we explore the use of learned models for accelerating model-free reinforcement learning. We show that iteratively refitted local linear models are especially effective for this, and demonstrate substantially faster learning on domains where such models are applicable.
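
The key construction is a Q-function that is quadratic in the action, so the greedy action is available in closed form; a small NumPy sketch is given below, where mu, L and v stand for per-state network outputs (names chosen here for illustration, not taken from the paper's code).

```python
import numpy as np

def naf_q_value(a, mu, L, v):
    """Normalized advantage function: Q(s, a) = V(s) + A(s, a), with
    A(s, a) = -0.5 * (a - mu(s))^T P(s) (a - mu(s)) and P = L L^T,
    so Q is quadratic in the action and maximized exactly at a = mu(s)."""
    P = L @ L.T                  # positive semi-definite matrix from a lower-triangular L
    diff = a - mu
    return v - 0.5 * diff @ P @ diff

# mu, L and v would come from network heads conditioned on the state s;
# the greedy action is simply mu(s), which keeps Q-learning tractable for
# continuous actions.
```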

Proceedings ArticleDOI
22 Jul 2019
TL;DR: A two-stage 3D object detection framework, named Sparse-to-Dense 3D Object Detector (STD), is proposed, which uses raw point clouds as input to generate accurate proposals by seeding each point with a new spherical anchor.
Abstract: We propose a two-stage 3D object detection framework, named sparse-to-dense 3D Object Detector (STD). The first stage is a bottom-up proposal generation network that uses raw point clouds as input to generate accurate proposals by seeding each point with a new spherical anchor. It achieves a higher recall with less computation compared with prior works. Then, PointsPool is applied for proposal feature generation by transforming interior point features from sparse expression to compact representation, which saves even more computation. In box prediction, which is the second stage, we implement a parallel intersection-over-union (IoU) branch to increase awareness of localization accuracy, resulting in further improved performance. We conduct experiments on KITTI dataset, and evaluate our method on 3D object and Bird’s Eye View (BEV) detection. Our method outperforms other methods by a large margin, especially on the hard set, with 10+ FPS inference speed.

Journal ArticleDOI
TL;DR: In this article, a review of dye-sensitized solar cells (DSSCs) and their key components, including the photoanode, sensitizer, electrolyte and counter electrode, is presented.

Journal ArticleDOI
TL;DR: Author(s): Saran, Rajiv; Robinson, Bruce; Abbott, Kevin C; Agodoa, Lawrence YC; Bhave, Nicole; Bragg-Gresham, Jennifer; Balkrishnan, Rajesh; Dietrich, Xue; Eckard, Ashley; Eggers, Paul W; Gaipov, Abduzhappar; Gillen, Daniel; Gipson, Debbie; Hailpern, Susan M; Hall, Yoshio N.

Journal ArticleDOI
Georges Aad1, Brad Abbott2, Jalal Abdallah3, Ovsat Abdinov4, Baptiste Abeloos5, Rosemarie Aben6, Ossama AbouZeid7, N. L. Abraham8, Halina Abramowicz9, Henso Abreu10, Ricardo Abreu11, Yiming Abulaiti12, Bobby Samir Acharya13, Bobby Samir Acharya14, Leszek Adamczyk15, David H. Adams16, Jahred Adelman17, Stefanie Adomeit18, Tim Adye19, A. A. Affolder20, Tatjana Agatonovic-Jovin21, Johannes Agricola22, Juan Antonio Aguilar-Saavedra23, Steven Ahlen24, Faig Ahmadov25, Faig Ahmadov4, Giulio Aielli26, Henrik Akerstedt12, T. P. A. Åkesson27, Andrei Akimov, Gian Luigi Alberghi28, Justin Albert29, S. Albrand30, M. J. Alconada Verzini31, Martin Aleksa32, Igor Aleksandrov25, Calin Alexa, Gideon Alexander9, Theodoros Alexopoulos33, Muhammad Alhroob2, Malik Aliev34, Gianluca Alimonti, John Alison35, Steven Patrick Alkire36, Bmm Allbrooke8, Benjamin William Allen11, Phillip Allport37, Alberto Aloisio38, Alejandro Alonso39, Francisco Alonso31, Cristiano Alpigiani40, Mahmoud Alstaty1, B. Alvarez Gonzalez32, D. Álvarez Piqueras41, Mariagrazia Alviggi38, Brian Thomas Amadio42, K. Amako, Y. Amaral Coutinho43, Christoph Amelung44, D. Amidei45, S. P. Amor Dos Santos46, António Amorim47, Simone Amoroso32, Glenn Amundsen44, Christos Anastopoulos48, Lucian Stefan Ancu49, Nansi Andari17, Timothy Andeen50, Christoph Falk Anders51, G. Anders32, John Kenneth Anders20, Kelby Anderson35, Attilio Andreazza52, Andrei51, Stylianos Angelidakis53, Ivan Angelozzi6, Philipp Anger54, Aaron Angerami36, Francis Anghinolfi32, Alexey Anisenkov55, Nuno Anjos56 
Aix-Marseille University1, University of Oklahoma2, University of Iowa3, Azerbaijan National Academy of Sciences4, Université Paris-Saclay5, University of Amsterdam6, University of California, Santa Cruz7, University of Sussex8, Tel Aviv University9, Technion – Israel Institute of Technology10, University of Oregon11, Stockholm University12, King's College London13, International Centre for Theoretical Physics14, AGH University of Science and Technology15, Brookhaven National Laboratory16, Northern Illinois University17, Ludwig Maximilian University of Munich18, Rutherford Appleton Laboratory19, University of Liverpool20, University of Belgrade21, University of Göttingen22, University of Granada23, Boston University24, Joint Institute for Nuclear Research25, University of Rome Tor Vergata26, Lund University27, University of Bologna28, University of Victoria29, University of Grenoble30, National University of La Plata31, CERN32, National Technical University of Athens33, University of Salento34, University of Chicago35, Columbia University36, University of Birmingham37, University of Naples Federico II38, University of Copenhagen39, University of Washington40, University of Valencia41, Lawrence Berkeley National Laboratory42, Federal University of Rio de Janeiro43, Brandeis University44, University of Michigan45, University of Coimbra46, University of Lisbon47, University of Sheffield48, University of Geneva49, University of Texas at Austin50, Heidelberg University51, University of Milan52, National and Kapodistrian University of Athens53, Dresden University of Technology54, Novosibirsk State University55, IFAE56
TL;DR: In this article, combined ATLAS and CMS measurements of the Higgs boson production and decay rates, as well as constraints on its couplings to vector bosons and fermions, are presented.
Abstract: Combined ATLAS and CMS measurements of the Higgs boson production and decay rates, as well as constraints on its couplings to vector bosons and fermions, are presented. The combination is based on the analysis of five production processes, namely gluon fusion, vector boson fusion, and associated production with a $W$ or a $Z$ boson or a pair of top quarks, and of the six decay modes $H \to ZZ, WW$, $\gamma\gamma, \tau\tau, bb$, and $\mu\mu$. All results are reported assuming a value of 125.09 GeV for the Higgs boson mass, the result of the combined measurement by the ATLAS and CMS experiments. The analysis uses the CERN LHC proton--proton collision data recorded by the ATLAS and CMS experiments in 2011 and 2012, corresponding to integrated luminosities per experiment of approximately 5 fb$^{-1}$ at $\sqrt{s}=7$ TeV and 20 fb$^{-1}$ at $\sqrt{s} = 8$ TeV. The Higgs boson production and decay rates measured by the two experiments are combined within the context of three generic parameterisations: two based on cross sections and branching fractions, and one on ratios of coupling modifiers. Several interpretations of the measurements with more model-dependent parameterisations are also given. The combined signal yield relative to the Standard Model prediction is measured to be 1.09 $\pm$ 0.11. The combined measurements lead to observed significances for the vector boson fusion production process and for the $H \to \tau\tau$ decay of $5.4$ and $5.5$ standard deviations, respectively. The data are consistent with the Standard Model predictions for all parameterisations considered.

Journal ArticleDOI
TL;DR: This review highlights various design and synthesis approaches toward the construction of ZMOFs, which are metal-organic frameworks (MOFs) with topologies and, in some cases, features akin to traditional inorganic zeolites.
Abstract: This review highlights various design and synthesis approaches toward the construction of ZMOFs, which are metal–organic frameworks (MOFs) with topologies and, in some cases, features akin to traditional inorganic zeolites. The interest in this unique subset of MOFs is correlated with their exceptional characteristics arising from the periodic pore systems and distinctive cage-like cavities, in conjunction with modular intra- and/or extra-framework components, which ultimately allow for tailoring of the pore size, pore shape, and/or properties towards specific applications.

Proceedings ArticleDOI
24 Oct 2016
TL;DR: The largest and most detailed measurement of online tracking conducted to date, based on a crawl of the top 1 million websites, is presented, which demonstrates the OpenWPM platform's strength in enabling researchers to rapidly detect, quantify, and characterize emerging online tracking behaviors.
Abstract: We present the largest and most detailed measurement of online tracking conducted to date, based on a crawl of the top 1 million websites. We make 15 types of measurements on each site, including stateful (cookie-based) and stateless (fingerprinting-based) tracking, the effect of browser privacy tools, and the exchange of tracking data between different sites ("cookie syncing"). Our findings include multiple sophisticated fingerprinting techniques never before measured in the wild. This measurement is made possible by our open-source web privacy measurement tool, OpenWPM, which uses an automated version of a full-fledged consumer browser. It supports parallelism for speed and scale, automatic recovery from failures of the underlying browser, and comprehensive browser instrumentation. We demonstrate our platform's strength in enabling researchers to rapidly detect, quantify, and characterize emerging online tracking behaviors.
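
As a toy illustration of the cookie-syncing measurement (not OpenWPM's actual API), one can flag cases where a tracker's identifying cookie value shows up in request URLs sent to a different third party; the data structures below are hypothetical simplifications.

```python
def detect_cookie_syncing(cookies, requests, min_len=8):
    """Toy illustration of cookie syncing: flag cases where a third party's
    cookie value appears inside a request URL sent to a *different* domain.
    `cookies` maps tracker domain -> cookie value; `requests` is a list of
    (destination_domain, url) pairs observed during a crawl."""
    events = []
    for owner, value in cookies.items():
        if len(value) < min_len:          # ignore short, non-identifying values
            continue
        for dest, url in requests:
            if dest != owner and value in url:
                events.append((owner, dest, value))
    return events
```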

Journal ArticleDOI
TL;DR: The transcatheter pacemaker implanted in patients who had guideline-based indications for ventricular pacing met the prespecified safety and efficacy goals; it had a safety profile similar to that of a transvenous system while providing low and stable pacing thresholds.
Abstract: Background: A leadless intracardiac transcatheter pacing system has been designed to avoid the need for a pacemaker pocket and transvenous lead. Methods: In a prospective multicenter study without controls, a transcatheter pacemaker was implanted in patients who had guideline-based indications for ventricular pacing. The analysis of the primary end points began when 300 patients reached 6 months of follow-up. The primary safety end point was freedom from system-related or procedure-related major complications. The primary efficacy end point was the percentage of patients with low and stable pacing capture thresholds at 6 months (≤2.0 V at a pulse width of 0.24 msec and an increase of ≤1.5 V from the time of implantation). The safety and efficacy end points were evaluated against performance goals (based on historical data) of 83% and 80%, respectively. We also performed a post hoc analysis in which the rates of major complications were compared with those in a control cohort of 2667 patients with transvenous ...

Journal ArticleDOI
14 Jan 2016-Nature
TL;DR: Traits generate trade-offs between performance with competition versus performance without competition, a fundamental ingredient in the classical hypothesis that the coexistence of plant species is enabled via differentiation in their successional strategies.
Abstract: Phenotypic traits and their associated trade-offs have been shown to have globally consistent effects on individual plant physiological functions, but how these effects scale up to influence competition, a key driver of community assembly in terrestrial vegetation, has remained unclear. Here we use growth data from more than 3 million trees in over 140,000 plots across the world to show how three key functional traits--wood density, specific leaf area and maximum height--consistently influence competitive interactions. Fast maximum growth of a species was correlated negatively with its wood density in all biomes, and positively with its specific leaf area in most biomes. Low wood density was also correlated with a low ability to tolerate competition and a low competitive effect on neighbours, while high specific leaf area was correlated with a low competitive effect. Thus, traits generate trade-offs between performance with competition versus performance without competition, a fundamental ingredient in the classical hypothesis that the coexistence of plant species is enabled via differentiation in their successional strategies. Competition within species was stronger than between species, but an increase in trait dissimilarity between species had little influence in weakening competition. No benefit of dissimilarity was detected for specific leaf area or wood density, and only a weak benefit for maximum height. Our trait-based approach to modelling competition makes generalization possible across the forest ecosystems of the world and their highly diverse species composition.

Posted Content
TL;DR: This work trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need of additional engineering or graph optimisation, demonstrating that convnets can be used to solve complicated out of image plane regression problems.
Abstract: We present a robust and real-time monocular six degree of freedom relocalization system. Our system trains a convolutional neural network to regress the 6-DOF camera pose from a single RGB image in an end-to-end manner with no need of additional engineering or graph optimisation. The algorithm can operate indoors and outdoors in real time, taking 5ms per frame to compute. It obtains approximately 2m and 6 degree accuracy for large scale outdoor scenes and 0.5m and 10 degree accuracy indoors. This is achieved using an efficient 23 layer deep convnet, demonstrating that convnets can be used to solve complicated out of image plane regression problems. This was made possible by leveraging transfer learning from large scale classification data. We show the convnet localizes from high level features and is robust to difficult lighting, motion blur and different camera intrinsics where point based SIFT registration fails. Furthermore we show how the pose feature that is produced generalizes to other scenes allowing us to regress pose with only a few dozen training examples. PoseNet code, dataset and an online demonstration is available on our project webpage, at this http URL
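
The regression target described above combines a translation error with a weighted quaternion error; a minimal NumPy sketch of such a loss is given below, where the beta default is only a placeholder for the scene-dependent weighting discussed in the work.

```python
import numpy as np

def pose_loss(x_pred, q_pred, x_true, q_true, beta=250.0):
    """Joint position/orientation regression loss in the style described above:
    Euclidean error on the translation plus a weighted quaternion error, with
    the predicted quaternion normalized to unit length. beta balances the two
    terms and is tuned per scene (the value here is purely illustrative)."""
    q_pred = q_pred / np.linalg.norm(q_pred)
    return np.linalg.norm(x_true - x_pred) + beta * np.linalg.norm(q_true - q_pred)
```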

Journal ArticleDOI
TL;DR: The aim of the ecospat package is to make available novel tools and methods to support spatial analyses and modeling of species niches and distributions in a coherent workflow and stimulate the use of comprehensive approaches in spatial modelling of species and community distributions.
Abstract: The aim of the ecospat package is to make available novel tools and methods to support spatial analyses and modeling of species niches and distributions in a coherent workflow. The package is written in the R language (R Development Core Team 2016) and contains several features, unique in their implementation, that are complementary to other existing R packages. Pre-modeling analyses include species niche quantifications and comparisons between distinct ranges or time periods, measures of phylogenetic diversity, and other data exploration functionalities (e.g. extrapolation detection, ExDet). Core modeling brings together the new approach of Ensemble of Small Models (ESM) and various implementations of the spatially-explicit modeling of species assemblages (SESAM) framework. Post-modeling analyses include evaluation of species predictions based on presence-only data (Boyce index) and of community predictions, phylogenetic diversity and environmentally-constrained species co-occurrences analyses. The ecospat package also provides some functions to supplement the biomod2 package (e.g. data preparation, permutation tests and cross-validation of model predictive power). With this novel package, we intend to stimulate the use of comprehensive approaches in spatial modelling of species and community distributions. This article is protected by copyright. All rights reserved.

Proceedings ArticleDOI
18 Jun 2018
TL;DR: A novel attention guided network which selectively integrates multi-level contextual information in a progressive manner and introduces multi-path recurrent feedback to enhance this proposed progressive attention driven framework.
Abstract: Effective convolutional features play an important role in saliency estimation but how to learn powerful features for saliency is still a challenging task. FCN-based methods directly apply multi-level convolutional features without distinction, which leads to sub-optimal results due to the distraction from redundant details. In this paper, we propose a novel attention guided network which selectively integrates multi-level contextual information in a progressive manner. Attentive features generated by our network can alleviate distraction of background thus achieve better performance. On the other hand, it is observed that most of existing algorithms conduct salient object detection by exploiting side-output features of the backbone feature extraction network. However, shallower layers of backbone network lack the ability to obtain global semantic information, which limits the effective feature learning. To address the problem, we introduce multi-path recurrent feedback to enhance our proposed progressive attention driven framework. Through multi-path recurrent connections, global semantic information from the top convolutional layer is transferred to shallower layers, which intrinsically refines the entire network. Experimental results on six benchmark datasets demonstrate that our algorithm performs favorably against the state-of-the-art approaches.

Journal ArticleDOI
TL;DR: This large-scale field experiment disentangles the effects of genotype, environment, age and year of harvest on bacterial communities associated with leaves and roots of Boechera stricta (Brassicaceae), a perennial wild mustard, to demonstrate how genotype-by-environment interactions contribute to the complexity of microbiome assembly in natural environments.
Abstract: Bacteria living on and in leaves and roots influence many aspects of plant health, so the extent of a plant's genetic control over its microbiota is of great interest to crop breeders and evolutionary biologists. Laboratory-based studies, because they poorly simulate true environmental heterogeneity, may misestimate or totally miss the influence of certain host genes on the microbiome. Here we report a large-scale field experiment to disentangle the effects of genotype, environment, age and year of harvest on bacterial communities associated with leaves and roots of Boechera stricta (Brassicaceae), a perennial wild mustard. Host genetic control of the microbiome is evident in leaves but not roots, and varies substantially among sites. Microbiome composition also shifts as plants age. Furthermore, a large proportion of leaf bacterial groups are shared with roots, suggesting inoculation from soil. Our results demonstrate how genotype-by-environment interactions contribute to the complexity of microbiome assembly in natural environments.

Book ChapterDOI
01 Jan 2016
TL;DR: In this article, the authors survey recent advances in algorithms for route planning in transportation networks, showing that driving directions on road networks can be computed in milliseconds or less even at continental scale, with some algorithms answering queries in a fraction of a microsecond while others deal efficiently with real-time traffic.
Abstract: We survey recent advances in algorithms for route planning in transportation networks. For road networks, we show that one can compute driving directions in milliseconds or less even at continental scale. A variety of techniques provide different trade-offs between preprocessing effort, space requirements, and query time. Some algorithms can answer queries in a fraction of a microsecond, while others can deal efficiently with real-time traffic. Journey planning on public transportation systems, although conceptually similar, is a significantly harder problem due to its inherent time-dependent and multicriteria nature. Although exact algorithms are fast enough for interactive queries on metropolitan transit systems, dealing with continent-sized instances requires simplifications or heavy preprocessing. The multimodal route planning problem, which seeks journeys combining schedule-based transportation (buses, trains) with unrestricted modes (walking, driving), is even harder, relying on approximate solutions even for metropolitan inputs.
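
For reference, the unaccelerated baseline that the surveyed speed-up techniques compare against is plain Dijkstra; a textbook Python sketch is given below, assuming an adjacency-list graph with non-negative edge weights.

```python
import heapq

def dijkstra(graph, source):
    """Textbook Dijkstra baseline: `graph` maps node -> list of (neighbor, weight).
    The preprocessing-based techniques surveyed above answer the same
    shortest-path queries orders of magnitude faster on road networks."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```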