
Posted Content
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
TL;DR: The authors introduced an architecture based entirely on convolutional neural networks, in which computations over all elements can be fully parallelized during training and optimization is easier because the number of nonlinearities is fixed and independent of the input length.
Abstract: The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
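For readers unfamiliar with gated linear units, the following is a minimal illustrative sketch (in PyTorch, not the authors' code) of one gated convolutional block of the kind the abstract describes; the layer sizes, residual connection, and all names are assumptions made for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedConvBlock(nn.Module):
    """One 1D convolution followed by a gated linear unit (GLU)."""
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        # The convolution outputs 2*channels so its result can be split into
        # a "value" half and a "gate" half for the GLU.
        self.conv = nn.Conv1d(channels, 2 * channels,
                              kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, length). F.glu computes a * sigmoid(b) from the
        # two halves of the conv output, which eases gradient propagation.
        h = F.glu(self.conv(x), dim=1)
        return x + h  # residual connection (an assumption of this sketch)

# The number of non-linearities is set by the stack depth, not by the input
# length, which is the parallelization point made in the abstract.
x = torch.randn(2, 64, 20)            # 2 sequences, 64 channels, length 20
print(GatedConvBlock(64)(x).shape)    # torch.Size([2, 64, 20])
```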

1,189 citations


Journal ArticleDOI
22 Jun 2020-Science
TL;DR: The epitope of 4A8 is defined as the N-terminal domain (NTD) of the S protein by determining, with cryo-electron microscopy, its structure in complex with the S protein, which points to the NTD as a promising target for therapeutic mAbs against COVID-19.
Abstract: Developing therapeutics against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) could be guided by the distribution of epitopes, not only on the receptor binding domain (RBD) of the Spike (S) protein but also across the full S protein. We isolated and characterized monoclonal antibodies (mAbs) from 10 convalescent COVID-19 patients. Three mAbs showed neutralizing activities against authentic SARS-CoV-2. One mAb, named 4A8, exhibits high neutralization potency against both authentic and pseudotyped SARS-CoV-2 but does not bind the RBD. We defined the epitope of 4A8 as the N-terminal domain (NTD) of the S protein by determining, with cryo-electron microscopy, its structure in complex with the S protein to an overall resolution of 3.1 angstroms and a local resolution of 3.3 angstroms for the 4A8-NTD interface. This points to the NTD as a promising target for therapeutic mAbs against COVID-19.

1,189 citations


Journal ArticleDOI
TL;DR: The new domain architecture search tool is described and the process of mapping of Gene Ontology terms to InterPro is outlined, and the challenges faced by the resource given the explosive growth in sequence data in recent years are discussed.
Abstract: The InterPro database (http://www.ebi.ac.uk/interpro/) is a freely available resource that can be used to classify sequences into protein families and to predict the presence of important domains and sites. Central to the InterPro database are predictive models, known as signatures, from a range of different protein family databases that have different biological focuses and use different methodological approaches to classify protein families and domains. InterPro integrates these signatures, capitalizing on the respective strengths of the individual databases, to produce a powerful protein classification resource. Here, we report on the status of InterPro as it enters its 15th year of operation, and give an overview of new developments with the database and its associated Web interfaces and software. In particular, the new domain architecture search tool is described and the process of mapping of Gene Ontology terms to InterPro is outlined. We also discuss the challenges faced by the resource given the explosive growth in sequence data in recent years. InterPro (version 48.0) contains 36 766 member database signatures integrated into 26 238 InterPro entries, an increase of over 3993 entries (5081 signatures) since 2012.

1,189 citations


Journal ArticleDOI
B. P. Abbott, R. Abbott, T. D. Abbott, Sheelu Abraham +1271 more (145 institutions)
TL;DR: In 2019, the LIGO Livingston detector observed a compact binary coalescence with signal-to-noise ratio 12.9; the Virgo detector was also taking data that did not contribute to detection due to a low signal-to-noise ratio but were used for subsequent parameter estimation, as discussed by the authors.
Abstract: On 2019 April 25, the LIGO Livingston detector observed a compact binary coalescence with signal-to-noise ratio 12.9. The Virgo detector was also taking data that did not contribute to detection due to a low signal-to-noise ratio, but were used for subsequent parameter estimation. The 90% credible intervals for the component masses range from 1.12 to 2.52 solar masses (1.46 to 1.87 solar masses if we restrict the dimensionless component spin magnitudes to be smaller than 0.05). These mass parameters are consistent with the individual binary components being neutron stars. However, both the source-frame chirp mass and the total mass of this system are significantly larger than those of any other known binary neutron star (BNS) system. The possibility that one or both binary components of the system are black holes cannot be ruled out from gravitational-wave data. We discuss possible origins of the system based on its inconsistency with the known Galactic BNS population. Under the assumption that the signal was produced by a BNS coalescence, the local rate of neutron star mergers is updated to 250-2810 Gpc^-3 yr^-1.

1,189 citations


Journal ArticleDOI
TL;DR: It is indicated that healthcare staff involved in the care of COVID-19 positive patients, and individuals considering themselves at risk of disease, were more likely to self-report acquiescence to COVID-19 vaccination if and when available, whereas parents, nurses, and medical workers not caring for SARS-CoV-2 positive patients expressed higher levels of vaccine hesitancy.
Abstract: Vaccine hesitancy remains a barrier to full population inoculation against highly infectious diseases. Coincident with the rapid developments of COVID-19 vaccines globally, concerns about the safety of such a vaccine could contribute to vaccine hesitancy. We analyzed 1941 anonymous questionnaires completed by healthcare workers and members of the general Israeli population, regarding acceptance of a potential COVID-19 vaccine. Our results indicate that healthcare staff involved in the care of COVID-19 positive patients, and individuals considering themselves at risk of disease, were more likely to self-report acquiescence to COVID-19 vaccination if and when available. In contrast, parents, nurses, and medical workers not caring for SARS-CoV-2 positive patients expressed higher levels of vaccine hesitancy. Interventional educational campaigns targeted towards populations at risk of vaccine hesitancy are therefore urgently needed to combat misinformation and avoid low inoculation rates.

1,188 citations


Journal ArticleDOI
TL;DR: A definition of keystone taxa in microbial ecology is proposed, and over 200 microbial keystone taxa that have been identified in soil, plant and marine ecosystems, as well as in the human microbiome, are summarized.
Abstract: Microorganisms have a pivotal role in the functioning of ecosystems. Recent studies have shown that microbial communities harbour keystone taxa, which drive community composition and function irrespective of their abundance. In this Opinion article, we propose a definition of keystone taxa in microbial ecology and summarize over 200 microbial keystone taxa that have been identified in soil, plant and marine ecosystems, as well as in the human microbiome. We explore the importance of keystone taxa and keystone guilds for microbiome structure and functioning and discuss the factors that determine their distribution and activities.

1,188 citations


Journal ArticleDOI
TL;DR: In this article, a novel neural network-based traffic forecasting method, the temporal graph convolutional network (T-GCN) model, which combines the graph convolutional network (GCN) and the gated recurrent unit (GRU), is proposed.
Abstract: Accurate and real-time traffic forecasting plays an important role in the intelligent traffic system and is of great significance for urban traffic planning, traffic management, and traffic control. However, traffic forecasting has always been considered an “open” scientific issue, owing to the constraints of urban road network topological structure and the law of dynamic change with time. To capture the spatial and temporal dependences simultaneously, we propose a novel neural network-based traffic forecasting method, the temporal graph convolutional network (T-GCN) model, which combines the graph convolutional network (GCN) and the gated recurrent unit (GRU). Specifically, the GCN is used to learn complex topological structures for capturing spatial dependence, and the gated recurrent unit is used to learn dynamic changes of traffic data for capturing temporal dependence. The T-GCN model is then applied to traffic forecasting on the urban road network. Experiments demonstrate that our T-GCN model can capture the spatio-temporal correlation from traffic data and that its predictions outperform state-of-the-art baselines on real-world traffic datasets. Our TensorFlow implementation of the T-GCN is available at https://www.github.com/lehaifeng/T-GCN .
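For intuition about how the two components interact, here is a deliberately tiny, hypothetical sketch (PyTorch, not the authors' released TensorFlow code) of a graph-convolution step feeding a GRU cell; the adjacency normalization, layer sizes, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyTGCN(nn.Module):
    """One graph-convolution (spatial) step per time step, fed into a GRU (temporal)."""
    def __init__(self, num_nodes: int, hidden: int, adj: torch.Tensor):
        super().__init__()
        a = adj + torch.eye(num_nodes)                                   # add self-loops
        self.register_buffer("a_hat", a / a.sum(dim=1, keepdim=True))    # row-normalize
        self.lin = nn.Linear(1, hidden)                                  # per-node feature lift
        self.gru = nn.GRUCell(num_nodes * hidden, num_nodes * hidden)
        self.out = nn.Linear(hidden, 1)                                  # one-step-ahead forecast

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        # x_seq: (time steps, num_nodes) of traffic readings, e.g. speeds.
        t_len, n = x_seq.shape
        h = torch.zeros(1, self.gru.hidden_size)
        for t in range(t_len):
            g = self.a_hat @ self.lin(x_seq[t].unsqueeze(-1))   # spatial propagation
            h = self.gru(g.reshape(1, -1), h)                   # temporal update
        return self.out(h.reshape(n, -1)).squeeze(-1)           # forecast per node

adj = (torch.rand(5, 5) > 0.5).float()        # toy road-network adjacency
model = TinyTGCN(num_nodes=5, hidden=8, adj=adj)
print(model(torch.rand(12, 5)).shape)         # torch.Size([5])
```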

1,188 citations


Proceedings ArticleDOI
25 Jun 2015
TL;DR: It is argued that the optimal architecture, i.e., the number of layers and the features/connections at each layer, is related to the bifurcation points of the information bottleneck tradeoff, namely, relevant compression of the input layer with respect to the output layer.
Abstract: Deep Neural Networks (DNNs) are analyzed via the theoretical framework of the information bottleneck (IB) principle. We first show that any DNN can be quantified by the mutual information between the layers and the input and output variables. Using this representation we can calculate the optimal information theoretic limits of the DNN and obtain finite sample generalization bounds. The advantage of getting closer to the theoretical limit is quantifiable both by the generalization bound and by the network's simplicity. We argue that both the optimal architecture, number of layers and features/connections at each layer, are related to the bifurcation points of the information bottleneck tradeoff, namely, relevant compression of the input layer with respect to the output layer. The hierarchical representations at the layered network naturally correspond to the structural phase transitions along the information curve. We believe that this new insight can lead to new optimality bounds and deep learning algorithms.
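For reference, the information bottleneck tradeoff the abstract refers to is commonly written as the following Lagrangian (standard notation, not quoted from the paper), where T is a layer's representation of the input X, Y is the output variable, and beta balances compression of X against preservation of information about Y:

\[ \min_{p(t \mid x)} \; \mathcal{L}_{\mathrm{IB}} \;=\; I(X;T) \;-\; \beta\, I(T;Y) \]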

1,187 citations


Journal ArticleDOI
TL;DR: Compared to other decision support tools, the STARTEC-tool is product-specific and multidisciplinary and includes interpretation and targeted recommendations for end-users.
Abstract: A prototype decision support IT-tool for the food industry was developed in the STARTEC project. Typical processes and decision steps were mapped using real life production scenarios of participating food companies manufacturing complex ready-to-eat foods. Companies looked for a more integrated approach when making food safety decisions that would align with existing HACCP systems. The tool was designed with shelf life assessments and data on safety, quality, and costs, using a pasta salad meal as a case product. The process flow chart was used as starting point, with simulation options at each process step. Key parameters like pH, water activity, costs of ingredients and salaries, and default models for calculations of Listeria monocytogenes, quality scores, and vitamin C, were placed in an interactive database. Customization of the models and settings was possible on the user-interface. The simulation module outputs were provided as detailed curves or categorized as "good"; "sufficient"; or "corrective action needed" based on threshold limit values set by the user. Possible corrective actions were suggested by the system. The tool was tested and approved by end-users based on selected ready-to-eat food products. Compared to other decision support tools, the STARTEC-tool is product-specific and multidisciplinary and includes interpretation and targeted recommendations for end-users.

1,187 citations


Posted Content
TL;DR: This work introduces a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks, and applies the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora.
Abstract: The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.
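As a rough illustration of sparse top-k expert routing, here is a toy sketch in PyTorch; the expert count, k, layer sizes, and the dense gating shown here are assumptions, and the paper's noisy gating and load-balancing terms are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoE(nn.Module):
    """A gating network picks k experts per example and mixes their outputs."""
    def __init__(self, dim: int, num_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
             for _ in range(num_experts)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim). Only the k largest gate values per example are kept,
        # so most experts do no work for a given example (conditional computation).
        topv, topi = self.gate(x).topk(self.k, dim=-1)
        weights = F.softmax(topv, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e in range(len(self.experts)):
                mask = topi[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * self.experts[e](x[mask])
        return out

print(TinyMoE(dim=16)(torch.randn(4, 16)).shape)   # torch.Size([4, 16])
```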

1,187 citations


Posted Content
TL;DR: This work proposes two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual data, and one supervised that leverages parallel data with a new cross-lingual language model objective.
Abstract: Recent studies have demonstrated the efficiency of generative pretraining for English natural language understanding. In this work, we extend this approach to multiple languages and show the effectiveness of cross-lingual pretraining. We propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual data, and one supervised that leverages parallel data with a new cross-lingual language model objective. We obtain state-of-the-art results on cross-lingual classification, unsupervised and supervised machine translation. On XNLI, our approach pushes the state of the art by an absolute gain of 4.9% accuracy. On unsupervised machine translation, we obtain 34.3 BLEU on WMT'16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised machine translation, we obtain a new state of the art of 38.5 BLEU on WMT'16 Romanian-English, outperforming the previous best approach by more than 4 BLEU. Our code and pretrained models will be made publicly available.
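To make the supervised objective concrete, here is a hypothetical toy sketch of the translation language modeling idea: a parallel sentence pair is concatenated and random tokens are masked for prediction, letting a model attend across both languages. The token IDs, mask rate, and the -1 "ignore" convention are illustrative assumptions, not the paper's actual preprocessing.

```python
import random

MASK_ID = 0  # assumed placeholder id for a [MASK] token

def make_tlm_example(src_ids, tgt_ids, mask_prob=0.15, seed=None):
    """Concatenate a parallel pair and mask random tokens for prediction."""
    rng = random.Random(seed)
    tokens = list(src_ids) + list(tgt_ids)
    inputs, targets = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            inputs.append(MASK_ID)   # hide the token ...
            targets.append(tok)      # ... and train the model to recover it
        else:
            inputs.append(tok)
            targets.append(-1)       # -1 = no loss at this position
    return inputs, targets

inp, tgt = make_tlm_example([11, 12, 13, 14], [21, 22, 23], seed=3)
print(inp)
print(tgt)
```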

Proceedings ArticleDOI
17 Apr 2015
TL;DR: A summary of the Borg system architecture and features, important design decisions, a quantitative analysis of some of its policy decisions, and a qualitative examination of lessons learned from a decade of operational experience with it are presented.
Abstract: Google's Borg system is a cluster manager that runs hundreds of thousands of jobs, from many thousands of different applications, across a number of clusters each with up to tens of thousands of machines. It achieves high utilization by combining admission control, efficient task-packing, over-commitment, and machine sharing with process-level performance isolation. It supports high-availability applications with runtime features that minimize fault-recovery time, and scheduling policies that reduce the probability of correlated failures. Borg simplifies life for its users by offering a declarative job specification language, name service integration, real-time job monitoring, and tools to analyze and simulate system behavior. We present a summary of the Borg system architecture and features, important design decisions, a quantitative analysis of some of its policy decisions, and a qualitative examination of lessons learned from a decade of operational experience with it.

Journal ArticleDOI
TL;DR: In individuals identified by screening as at risk of malnutrition, the diagnosis of malnutrition should be based on either a low BMI (<18.5 kg/m²), or on the combined finding of weight loss together with either reduced BMI (age-specific) or a low FFMI using sex-specific cut-offs.


Journal ArticleDOI
TL;DR: Ibrutinib was superior to chlorambucil in previously untreated patients with CLL or small lymphocytic lymphoma, as assessed by progression-free survival, overall survival, response rate, and improvement in hematologic variables.
Abstract: BACKGROUND Chronic lymphocytic leukemia (CLL) primarily affects older persons who often have coexisting conditions in addition to disease-related immunosuppression and myelosuppression. We conducted an international, open-label, randomized phase 3 trial to compare two oral agents, ibrutinib and chlorambucil, in previously untreated older patients with CLL or small lymphocytic lymphoma. METHODS We randomly assigned 269 previously untreated patients who were 65 years of age or older and had CLL or small lymphocytic lymphoma to receive ibrutinib or chlorambucil. The primary end point was progression-free survival as assessed by an independent review committee. RESULTS The median age of the patients was 73 years. During a median follow-up period of 18.4 months, ibrutinib resulted in significantly longer progression-free survival than did chlorambucil (median, not reached vs. 18.9 months), with a risk of progression or death that was 84% lower with ibrutinib than that with chlorambucil (hazard ratio, 0.16; P<0.001). Ibrutinib significantly prolonged overall survival; the estimated survival rate at 24 months was 98% with ibrutinib versus 85% with chlorambucil, with a relative risk of death that was 84% lower in the ibrutinib group than in the chlorambucil group (hazard ratio, 0.16; P = 0.001). The overall response rate was higher with ibrutinib than with chlorambucil (86% vs. 35%, P<0.001). The rates of sustained increases from baseline values in the hemoglobin and platelet levels were higher with ibrutinib. Adverse events of any grade that occurred in at least 20% of the patients receiving ibrutinib included diarrhea, fatigue, cough, and nausea; adverse events occurring in at least 20% of those receiving chlorambucil included nausea, fatigue, neutropenia, anemia, and vomiting. In the ibrutinib group, four patients had a grade 3 hemorrhage and one had a grade 4 hemorrhage. A total of 87% of the patients in the ibrutinib group are continuing to take ibrutinib. CONCLUSIONS Ibrutinib was superior to chlorambucil in previously untreated patients with CLL or small lymphocytic lymphoma, as assessed by progression-free survival, overall survival, response rate, and improvement in hematologic variables. (Funded by Pharmacyclics and others; RESONATE-2 ClinicalTrials.gov number, NCT01722487.)
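For clarity, the quoted "84% lower" figures follow directly from the reported hazard ratio:

\[ 1 - \mathrm{HR} \;=\; 1 - 0.16 \;=\; 0.84 \;=\; 84\% \ \text{relative reduction in the hazard.} \]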

Journal ArticleDOI
TL;DR: Ncorr is an open-source subset-based 2D DIC package that amalgamates modern DIC algorithms proposed in the literature with additional enhancements; several applications of Ncorr that both validate it and showcase its capabilities are discussed.
Abstract: Digital Image Correlation (DIC) is an important and widely used non-contact technique for measuring material deformation. Considerable progress has been made in recent decades in both developing new experimental DIC techniques and in enhancing the performance of the relevant computational algorithms. Despite this progress, there is a distinct lack of a freely available, high-quality, flexible DIC software. This paper documents a new DIC software package Ncorr that is meant to fill that crucial gap. Ncorr is an open-source subset-based 2D DIC package that amalgamates modern DIC algorithms proposed in the literature with additional enhancements. Several applications of Ncorr that both validate it and showcase its capabilities are discussed.
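To illustrate the subset-matching idea at the core of 2D DIC, here is a self-contained toy in Python/NumPy; this is not Ncorr itself, and real packages add subpixel interpolation and subset shape functions, which are omitted here.

```python
import numpy as np

def zncc(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def find_integer_shift(ref_subset, deformed, top_left, search=5):
    """Brute-force integer-pixel search for a subset's displacement."""
    h, w = ref_subset.shape
    r0, c0 = top_left
    best = (0, 0, -np.inf)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = r0 + dr, c0 + dc
            if r < 0 or c < 0 or r + h > deformed.shape[0] or c + w > deformed.shape[1]:
                continue
            score = zncc(ref_subset, deformed[r:r + h, c:c + w])
            if score > best[2]:
                best = (dr, dc, score)
    return best   # (row shift, column shift, correlation)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
deformed = np.roll(img, shift=(2, 3), axis=(0, 1))                # synthetic rigid shift
print(find_integer_shift(img[20:35, 20:35], deformed, (20, 20)))  # ~ (2, 3, 1.0)
```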

Book ChapterDOI
08 Oct 2018
TL;DR: This chapter asks whether people can use the Internet to find community, and whether online relationships between people who never see, smell, or hear each other can be supportive and intimate.
Abstract: Can people use the Internet to find community? Can online relationships between people who never see, smell, or hear each other be supportive and intimate?

Proceedings ArticleDOI
27 Jun 2016
TL;DR: This paper proposes three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks and presents a convolutional network for real-time disparity estimation that provides state-of-the-art results.
Abstract: Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.

Journal ArticleDOI
TL;DR: The most comprehensive and most highly resolved economic input–output framework of the world economy, together with a detailed database of global material flows, is used to calculate the full material requirements of all countries over a period of two decades, demonstrating that countries’ use of nondomestic resources is about threefold larger than the physical quantity of traded goods.
Abstract: Metrics on resource productivity currently used by governments suggest that some developed countries have increased the use of natural resources at a slower rate than economic growth (relative decoupling) or have even managed to use fewer resources over time (absolute decoupling). Using the material footprint (MF), a consumption-based indicator of resource use, we find the contrary: Achievements in decoupling in advanced economies are smaller than reported or even nonexistent. We present a time series analysis of the MF of 186 countries and identify material flows associated with global production and consumption networks in unprecedented specificity. By calculating raw material equivalents of international trade, we demonstrate that countries’ use of nondomestic resources is, on average, about threefold larger than the physical quantity of traded goods. As wealth grows, countries tend to reduce their domestic portion of materials extraction through international trade, whereas the overall mass of material consumption generally increases. With every 10% increase in gross domestic product, the average national MF increases by 6%. Our findings call into question the sole use of current resource productivity indicators in policy making and suggest the necessity of an additional focus on consumption-based accounting for natural resource use.
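Expressed as an approximate elasticity of the material footprint (MF) with respect to gross domestic product (GDP), the quoted relationship corresponds to

\[ \varepsilon \;\approx\; \frac{\ln(1.06)}{\ln(1.10)} \;\approx\; 0.6, \qquad \text{i.e. roughly } \mathrm{MF} \propto \mathrm{GDP}^{\,0.6}. \]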

Journal ArticleDOI
TL;DR: The risk of liver-related mortality increases exponentially with increasing fibrosis stage; these data have important implications for assessing the utility of each stage and the benefits of regression of fibrosis from one stage to another.

Journal ArticleDOI
01 Sep 2015-Gut
TL;DR: A global consensus for gastritis was developed for the first time, which will be the basis for an international classification system and for further research on the subject.
Abstract: Objective To present results of the Kyoto Global Consensus Meeting, which was convened to develop global consensus on (1) classification of chronic gastritis and duodenitis, (2) clinical distinction of dyspepsia caused by Helicobacter pylori from functional dyspepsia, (3) appropriate diagnostic assessment of gastritis and (4) when, whom and how to treat H. pylori gastritis. Design Twenty-three clinical questions addressing the above-mentioned four domains were drafted for which expert panels were asked to formulate relevant statements. A Delphi method using an anonymous electronic system was adopted to develop the consensus, the level of which was predefined as ≥80%. Final modifications of clinical questions and consensus were achieved at the face-to-face meeting in Kyoto. Results All 24 statements for 22 clinical questions after extensive modifications and omission of one clinical question were achieved with a consensus level of >80%. To better organise classification of gastritis and duodenitis based on aetiology, a new classification of gastritis and duodenitis is recommended for the 11th international classification. A new category of H. pylori -associated dyspepsia together with a diagnostic algorithm was proposed. The adoption of grading systems for gastric cancer risk stratification, and modern image-enhancing endoscopy for the diagnosis of gastritis, were recommended. Treatment to eradicate H. pylori infection before preneoplastic changes develop, if feasible, was recommended to minimise the risk of more serious complications of the infection. Conclusions A global consensus for gastritis was developed for the first time, which will be the basis for an international classification system and for further research on the subject.

Journal ArticleDOI
21 Apr 2017
TL;DR: A novel sub-operational-taxonomic-unit (sOTU) approach, Deblur, uses error profiles to obtain putative error-free sequences from Illumina MiSeq and HiSeq sequencing platforms; it substantially reduces computational demands relative to similar sOTU methods and does so with similar or better sensitivity and specificity.
Abstract: High-throughput sequencing of 16S ribosomal RNA gene amplicons has facilitated understanding of complex microbial communities, but the inherent noise in PCR and DNA sequencing limits differentiation of closely related bacteria. Although many scientific questions can be addressed with broad taxonomic profiles, clinical, food safety, and some ecological applications require higher specificity. Here we introduce a novel sub-operational-taxonomic-unit (sOTU) approach, Deblur, that uses error profiles to obtain putative error-free sequences from Illumina MiSeq and HiSeq sequencing platforms. Deblur substantially reduces computational demands relative to similar sOTU methods and does so with similar or better sensitivity and specificity. Using simulations, mock mixtures, and real data sets, we detected closely related bacterial sequences with single nucleotide differences while removing false positives and maintaining stability in detection, suggesting that Deblur is limited only by read length and diversity within the amplicon sequences. Because Deblur operates on a per-sample level, it scales to modern data sets and meta-analyses. To highlight Deblur's ability to integrate data sets, we include an interactive exploration of its application to multiple distinct sequencing rounds of the American Gut Project. Deblur is open source under the Berkeley Software Distribution (BSD) license, easily installable, and downloadable from https://github.com/biocore/deblur. IMPORTANCE Deblur provides a rapid and sensitive means to assess ecological patterns driven by differentiation of closely related taxa. This algorithm provides a solution to the problem of identifying real ecological differences between taxa whose amplicons differ by a single base pair, is applicable in an automated fashion to large-scale sequencing data sets, and can integrate sequencing runs collected over time.

Journal ArticleDOI
27 Aug 2015-Cell
TL;DR: In melanoma patients treated with an immune checkpoint therapy, high viral defense signature expression in tumors significantly associates with durable clinical response and DNMTi treatment sensitizes to anti-CTLA4 therapy in a pre-clinical melanoma model.

Journal ArticleDOI
TL;DR: This work proposes a dynamic nonlinear reaction diffusion model with time-dependent parameters, which preserves the structural simplicity of diffusion models and takes only a small number of diffusion steps, making the inference procedure extremely fast.
Abstract: Image restoration is a long-standing problem in low-level computer vision with many interesting applications. We describe a flexible learning framework based on the concept of nonlinear reaction diffusion models for various image restoration problems. By embodying recent improvements in nonlinear diffusion models, we propose a dynamic nonlinear reaction diffusion model with time-dependent parameters ( i.e. , linear filters and influence functions). In contrast to previous nonlinear diffusion models, all the parameters, including the filters and the influence functions, are simultaneously learned from training data through a loss based approach. We call this approach TNRD— Trainable Nonlinear Reaction Diffusion . The TNRD approach is applicable for a variety of image restoration tasks by incorporating appropriate reaction force. We demonstrate its capabilities with three representative applications, Gaussian image denoising, single image super resolution and JPEG deblocking. Experiments show that our trained nonlinear diffusion models largely benefit from the training of the parameters and finally lead to the best reported performance on common test datasets for the tested applications. Our trained models preserve the structural simplicity of diffusion models and take only a small number of diffusion steps, thus are highly efficient. Moreover, they are also well-suited for parallel computation on GPUs, which makes the inference procedure extremely fast.
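For context, one diffusion step of such a trainable nonlinear reaction-diffusion model is commonly written as follows (notation assumed here rather than quoted from the paper): the k_i are learned linear filters, the phi_i are learned influence functions, \bar{k}_i denotes the filter rotated by 180 degrees, f is the degraded input, and lambda^t weights the reaction (data-fidelity) term:

\[ u^{t} \;=\; u^{t-1} \;-\; \Big( \sum_{i=1}^{N_k} \bar{k}_i^{\,t} * \phi_i^{t}\!\big(k_i^{t} * u^{t-1}\big) \;+\; \lambda^{t}\big(u^{t-1} - f\big) \Big) \]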

Journal ArticleDOI
27 Feb 2015-Science
TL;DR: It is shown that N6-methyladenosine (m6A), a messenger RNA (mRNA) modification present on transcripts of pluripotency factors, drives the transition from the pluripotent to the differentiated state.
Abstract: Naive and primed pluripotent states retain distinct molecular properties, yet limited knowledge exists on how their state transitions are regulated. Here, we identify Mettl3, an N(6)-methyladenosine (m(6)A) transferase, as a regulator for terminating murine naive pluripotency. Mettl3 knockout preimplantation epiblasts and naive embryonic stem cells are depleted for m(6)A in mRNAs, yet are viable. However, they fail to adequately terminate their naive state and, subsequently, undergo aberrant and restricted lineage priming at the postimplantation stage, which leads to early embryonic lethality. m(6)A predominantly and directly reduces mRNA stability, including that of key naive pluripotency-promoting transcripts. This study highlights a critical role for an mRNA epigenetic modification in vivo and identifies regulatory modules that functionally influence naive and primed pluripotency in an opposing manner.

Journal ArticleDOI
TL;DR: The steps of a typical single‐cell RNA‐seq analysis, including pre‐processing (quality control, normalization, data correction, feature selection, and dimensionality reduction) and cell‐ and gene‐level downstream analysis, are detailed.
Abstract: Single-cell RNA-seq has enabled gene expression to be studied at an unprecedented resolution. The promise of this technology is attracting a growing user base for single-cell analysis methods. As more analysis tools are becoming available, it is becoming increasingly difficult to navigate this landscape and produce an up-to-date workflow to analyse one's data. Here, we detail the steps of a typical single-cell RNA-seq analysis, including pre-processing (quality control, normalization, data correction, feature selection, and dimensionality reduction) and cell- and gene-level downstream analysis. We formulate current best-practice recommendations for these steps based on independent comparison studies. We have integrated these best-practice recommendations into a workflow, which we apply to a public dataset to further illustrate how these steps work in practice. Our documented case study can be found at https://www.github.com/theislab/single-cell-tutorial . This review will serve as a workflow tutorial for new entrants into the field, and help established users update their analysis pipelines.
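As a concrete and deliberately minimal illustration of the pre-processing and downstream steps listed above, the following sketch uses the scanpy package; the dataset and cutoff values are illustrative defaults rather than the tutorial's exact settings.

```python
import scanpy as sc

adata = sc.datasets.pbmc3k()                      # small public example dataset

# Pre-processing: quality control and normalization
sc.pp.filter_cells(adata, min_genes=200)          # drop near-empty barcodes
sc.pp.filter_genes(adata, min_cells=3)            # drop rarely detected genes
sc.pp.normalize_total(adata, target_sum=1e4)      # depth normalization
sc.pp.log1p(adata)

# Feature selection and dimensionality reduction
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable].copy()
sc.pp.scale(adata, max_value=10)
sc.tl.pca(adata, n_comps=50)

# Cell- and gene-level downstream analysis: neighbourhood graph, clustering, embedding
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata)
sc.tl.umap(adata)
print(adata.obs["leiden"].value_counts().head())
```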

Journal ArticleDOI
TL;DR: In this paper, the authors review different approaches, technologies, and strategies to manage large-scale schemes of variable renewable electricity such as solar and wind power, considering both supply and demand side measures.
Abstract: The paper reviews different approaches, technologies, and strategies to manage large-scale schemes of variable renewable electricity such as solar and wind power. We consider both supply and demand side measures. In addition to presenting energy system flexibility measures, their importance to renewable electricity is discussed. The flexibility measures available range from traditional ones such as grid extension or pumped hydro storage to more advanced strategies such as demand side management and demand side linked approaches, e.g. the use of electric vehicles for storing excess electricity, but also providing grid support services. Advanced batteries may offer new solutions in the future, though the high costs associated with batteries may restrict their use to smaller scale applications. Different “P2Y”-type strategies, where P stands for surplus renewable power and Y for the energy form or energy service into which this excess is converted, e.g. thermal energy, hydrogen, gas or mobility, are receiving much attention as potential flexibility solutions, making use of the energy system as a whole. To “functionalize” or to assess the value of the various energy system flexibility measures, these often need to be put into an electricity/energy market or utility service context. Summarizing, the outlook for managing large amounts of RE power in terms of options available seems to be promising.

Journal ArticleDOI
TL;DR: These guidelines describe recent recommendations on treatment indications and the choice of modality for ureteral and renal calculi, and suggest that active treatment of urolithiasis is currently a minimally invasive intervention, with preference for endourologic techniques.

01 Jan 2016
TL;DR: In this article, the authors developed a clinical risk prediction tool for estimating the cumulative six month risk of death and death or myocardial infarction to facilitate triage and management of patients with acute coronary syndrome.
Abstract: Objective To develop a clinical risk prediction tool for estimating the cumulative six month risk of death and death or myocardial infarction to facilitate triage and management of patients with acute coronary syndrome. Design Prospective multinational observational study in which we used multivariable regression to develop a final predictive model, with prospective and external validation. Setting Ninety four hospitals in 14 countries in Europe, North and South America, Australia, and New Zealand. Population 43 810 patients (21 688 in derivation set; 22 122 in validation set) presenting with acute coronary syndrome with or without ST segment elevation enrolled in the global registry of acute coronary events (GRACE) study between April 1999 and September 2005. Main outcome measures Death and myocardial infarction. Results 1989 patients died in hospital, 1466 died between discharge and six month follow-up, and 2793 sustained a new non-fatal myocardial infarction. Nine factors independently predicted death and the combined end point of death or myocardial infarction in the period from admission to six months after discharge: age, development (or history) of heart failure, peripheral vascular disease, systolic blood pressure, Killip class, initial serum creatinine concentration, elevated initial cardiac markers, cardiac arrest on admission, and ST segment deviation. The simplified model was robust, with prospectively validated C-statistics of 0.81 for predicting death and 0.73 for death or myocardial infarction from admission to six months after discharge. The external applicability of the model was validated in the dataset from GUSTO IIb (global use of strategies to open occluded coronary arteries). Conclusions This risk prediction tool uses readily identifiable variables to provide robust prediction of the cumulative six month risk of death or myocardial infarction. It is a rapid and widely applicable method for assessing cardiovascular risk to complement clinical assessment and can guide patient triage and management across the spectrum of patients with acute coronary syndrome.
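Since the abstract reports model performance as C-statistics, a small self-contained illustration of that quantity may help: for a binary outcome it is the probability that a randomly chosen patient who had the event received a higher predicted risk than one who did not (equivalent to the area under the ROC curve). The predicted risks and outcomes below are made up, not data from the study.

```python
from itertools import product

def c_statistic(risks, outcomes):
    """Concordance between predicted risks and observed binary outcomes."""
    events = [r for r, y in zip(risks, outcomes) if y == 1]
    nonevents = [r for r, y in zip(risks, outcomes) if y == 0]
    pairs = list(product(events, nonevents))
    concordant = sum(1.0 if e > n else 0.5 if e == n else 0.0 for e, n in pairs)
    return concordant / len(pairs)

risks    = [0.05, 0.10, 0.20, 0.35, 0.60, 0.80]   # hypothetical predicted risks
outcomes = [0,    0,    0,    1,    0,    1]      # 1 = death within six months
print(round(c_statistic(risks, outcomes), 2))     # 0.88 for this toy example
```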

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the underpinnings of topological band theory and its materials applications as a framework for predicting new classes of topological materials.
Abstract: First-principles band theory, properly augmented by topological considerations, has provided a remarkably successful framework for predicting new classes of topological materials. This Colloquium discusses the underpinnings of the topological band theory and its materials applications.