
Journal ArticleDOI
TL;DR: It is concluded that a causal relationship exists between prenatal Zika virus infection and microcephaly and other serious brain anomalies, and that efforts toward the prevention of adverse outcomes caused by congenital Zika virus infection need to be intensified.
Abstract: Summary The Zika virus has spread rapidly in the Americas since its first identification in Brazil in early 2015. Prenatal Zika virus infection has been linked to adverse pregnancy and birth outcomes, most notably microcephaly and other serious brain anomalies. To determine whether Zika virus infection during pregnancy causes these adverse outcomes, we evaluated available data using criteria that have been proposed for the assessment of potential teratogens. On the basis of this review, we conclude that a causal relationship exists between prenatal Zika virus infection and microcephaly and other serious brain anomalies. Evidence that was used to support this causal relationship included Zika virus infection at times during prenatal development that were consistent with the defects observed; a specific, rare phenotype involving microcephaly and associated brain anomalies in fetuses or infants with presumed or confirmed congenital Zika virus infection; and data that strongly support biologic plausibility, including the identification of Zika virus in the brain tissue of affected fetuses and infants. Given the recognition of this causal relationship, we need to intensify our efforts toward the prevention of adverse outcomes caused by congenital Zika virus infection. However, many questions that are critical to our prevention efforts remain, including the spectrum of defects caused by prenatal Zika virus infection, the degree of relative and absolute risks of adverse outcomes among fetuses whose mothers were infected at different times during pregnancy, and factors that might affect a woman’s risk of adverse pregnancy or birth outcomes. Addressing these questions will improve our ability to reduce the burden of the effects of Zika virus infection during pregnancy.

1,692 citations


Posted Content
TL;DR: The Microsoft COCO Caption dataset and evaluation server are described, and several popular metrics, including BLEU, METEOR, ROUGE, and CIDEr, are used to score candidate captions.
Abstract: In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
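As a rough illustration of the scoring step, the sketch below computes BLEU for one candidate caption against five references using NLTK as a stand-in; the official evaluation server uses the coco-caption toolkit and also reports METEOR, ROUGE, and CIDEr, and the captions here are made-up examples.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Five human-written references for one image (hypothetical examples) and one candidate.
references = [[
    "a man riding a wave on a surfboard".split(),
    "a surfer rides a large ocean wave".split(),
    "a person on a surfboard in the water".split(),
    "a man surfing on a wave in the ocean".split(),
    "someone riding a wave on a white surfboard".split(),
]]
candidates = ["a man is surfing on a wave".split()]

# Corpus-level BLEU-4 with smoothing, since captions are short.
score = corpus_bleu(references, candidates,
                    smoothing_function=SmoothingFunction().method1)
print(f"BLEU-4: {score:.3f}")
```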

1,691 citations


Journal ArticleDOI
TL;DR: AntiSMASH as mentioned in this paper is a web server and stand-alone tool for the automatic genomic identification and analysis of biosynthetic gene clusters, available at http://antismash.org.
Abstract: Microbial secondary metabolism constitutes a rich source of antibiotics, chemotherapeutics, insecticides and other high-value chemicals. Genome mining of gene clusters that encode the biosynthetic pathways for these metabolites has become a key methodology for novel compound discovery. In 2011, we introduced antiSMASH, a web server and stand-alone tool for the automatic genomic identification and analysis of biosynthetic gene clusters, available at http://antismash.secondarymetabolites.org. Here, we present version 3.0 of antiSMASH, which has undergone major improvements. A full integration of the recently published ClusterFinder algorithm now allows using this probabilistic algorithm to detect putative gene clusters of unknown types. Also, a new dereplication variant of the ClusterBlast module now identifies similarities of identified clusters to any of 1172 clusters with known end products. At the enzyme level, active sites of key biosynthetic enzymes are now pinpointed through a curated pattern-matching procedure and Enzyme Commission numbers are assigned to functionally classify all enzyme-coding genes. Additionally, chemical structure prediction has been improved by incorporating polyketide reduction states. Finally, in order for users to be able to organize and analyze multiple antiSMASH outputs in a private setting, a new XML output module allows offline editing of antiSMASH annotations within the Geneious software.

1,691 citations


Posted Content
TL;DR: Deformable DETR, whose attention modules only attend to a small set of key sampling points around a reference, can achieve better performance than DETR (especially on small objects) with 10× fewer training epochs.
Abstract: DETR has been recently proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitation of Transformer attention modules in processing image feature maps. To mitigate these issues, we propose Deformable DETR, whose attention modules only attend to a small set of key sampling points around a reference. Deformable DETR can achieve better performance than DETR (especially on small objects) with 10 times fewer training epochs. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach. Code is released at this https URL.
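A minimal single-scale, single-head sketch of the sampling idea described above, assuming PyTorch; shapes and names are illustrative, and the actual method uses multi-scale feature maps, multiple heads, and a custom CUDA kernel.

```python
import torch
import torch.nn.functional as F

def deformable_attention(value, ref_points, offsets, attn_weights):
    """value: (N, C, H, W) feature map
    ref_points: (N, Q, 2) reference point per query, normalized to [0, 1] as (x, y)
    offsets: (N, Q, K, 2) learned sampling offsets (normalized units)
    attn_weights: (N, Q, K) softmax-normalized weights over the K sampling points
    returns: (N, Q, C) aggregated features
    """
    # Sampling locations around each reference, mapped to grid_sample's [-1, 1] range.
    loc = (ref_points.unsqueeze(2) + offsets).clamp(0, 1) * 2 - 1            # (N, Q, K, 2)
    sampled = F.grid_sample(value, loc, mode="bilinear", align_corners=False)  # (N, C, Q, K)
    out = (sampled * attn_weights.unsqueeze(1)).sum(dim=-1)                   # (N, C, Q)
    return out.transpose(1, 2)
```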

1,691 citations


Proceedings ArticleDOI
01 Jan 2016
TL;DR: This work introduces and studies an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences, which allows it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features.
Abstract: The standard recurrent neural network language model (RNNLM) generates sentences one word at a time and does not work from an explicit global sentence representation. In this work, we introduce and study an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences. This factorization allows it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features. Samples from the prior over these sentence representations remarkably produce diverse and well-formed sentences through simple deterministic decoding. By examining paths through this latent space, we are able to generate coherent novel sentences that interpolate between known sentences. We present techniques for solving the difficult learning problem presented by this model, demonstrate its effectiveness in imputing missing words, explore many interesting properties of the model's latent sentence space, and present negative results on the use of the model in language modeling.
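A hedged sketch of the training objective involved (token reconstruction plus a KL term), assuming PyTorch; the kl_weight argument reflects the KL-cost annealing trick the paper uses for the difficult learning problem, and all shapes and names here are illustrative.

```python
import torch
import torch.nn.functional as F

def sentence_vae_loss(logits, targets, mu, logvar, kl_weight):
    """logits: (batch, seq_len, vocab) decoder outputs
    targets: (batch, seq_len) token ids
    mu, logvar: (batch, latent_dim) posterior parameters of the sentence code
    kl_weight: scalar annealed from 0 toward 1 over training
    """
    recon = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                            targets.reshape(-1), reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl_weight * kl
```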

1,690 citations


Journal ArticleDOI
22 Apr 2016-Science
TL;DR: Proof-of-principle experimental studies support the hypothesis that trained immunity is one of the main immunological processes that mediate the nonspecific protective effects against infections induced by vaccines, such as bacillus Calmette-Guérin or measles vaccination.
Abstract: The general view that only adaptive immunity can build immunological memory has recently been challenged. In organisms lacking adaptive immunity, as well as in mammals, the innate immune system can mount resistance to reinfection, a phenomenon termed "trained immunity" or "innate immune memory." Trained immunity is orchestrated by epigenetic reprogramming, broadly defined as sustained changes in gene expression and cell physiology that do not involve permanent genetic changes such as mutations and recombination, which are essential for adaptive immunity. The discovery of trained immunity may open the door for novel vaccine approaches, new therapeutic strategies for the treatment of immune deficiency states, and modulation of exaggerated inflammation in autoinflammatory diseases.

1,690 citations


Journal ArticleDOI
17 Nov 2016-Cell
TL;DR: Dietary fiber deprivation, together with a fiber-deprived, mucus-eroding microbiota, promotes greater epithelial access and lethal colitis by the mucosal pathogen, Citrobacter rodentium.

1,689 citations


Journal ArticleDOI
TL;DR: A systematic treatment of non-orthogonal multiple access is provided, from its combination with MIMO technologies to cooperative NOMA, as well as the interplay between NOMA and cognitive radio.
Abstract: As the latest member of the multiple access family, non-orthogonal multiple access (NOMA) has been recently proposed for 3GPP LTE and is envisioned to be an essential component of 5G mobile networks. The key feature of NOMA is to serve multiple users at the same time/frequency/ code, but with different power levels, which yields a significant spectral efficiency gain over conventional orthogonal MA. The article provides a systematic treatment of this newly emerging technology, from its combination with MIMO technologies to cooperative NOMA, as well as the interplay between NOMA and cognitive radio. This article also reviews the state of the art in the standardization activities concerning the implementation of NOMA in LTE and 5G networks.
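As a toy illustration of the power-domain idea described above (two downlink users sharing the same resource with different power levels and successive interference cancellation), here is a small sketch; the channel gains, power split, and unit noise power are assumptions for the example.

```python
import numpy as np

def noma_two_user_rates(g_near, g_far, p_near, p_far, noise=1.0):
    """Achievable rates (bits/s/Hz) for two-user downlink NOMA.
    The far (weaker) user decodes its own signal while treating the near user's
    signal as interference; the near (stronger) user cancels the far user's
    signal via SIC and then decodes its own signal interference-free."""
    r_far = np.log2(1 + p_far * g_far / (p_near * g_far + noise))
    r_near = np.log2(1 + p_near * g_near / noise)
    return r_near, r_far

# Example: strong channel for the near user, most of the power to the far user.
print(noma_two_user_rates(g_near=10.0, g_far=1.0, p_near=0.2, p_far=0.8))
```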

1,687 citations


Journal ArticleDOI
TL;DR: The effects of renin–angiotensin–aldosterone system (RAAS) inhibitors on angiotensin-converting enzyme 2 levels and activity in patients with Covid-19 remain uncertain.
Abstract: RAAS Inhibitors in Patients with Covid-19 The effects of renin–angiotensin–aldosterone system blockers on angiotensin-converting enzyme 2 levels and activity in humans are uncertain. The authors hy...

1,687 citations


Journal ArticleDOI
03 Mar 2017-Science
TL;DR: The ability to design COFs and to adjust their pore metrics using the principles of reticular synthesis has given rise to frameworks with ultralow densities, which has resulted in the first implementation of the concept of molecular weaving.
Abstract: Just over a century ago, Lewis published his seminal work on what became known as the covalent bond, which has since occupied a central role in the theory of making organic molecules. With the advent of covalent organic frameworks (COFs), the chemistry of the covalent bond was extended to two- and three-dimensional frameworks. Here, organic molecules are linked by covalent bonds to yield crystalline, porous COFs from light elements (boron, carbon, nitrogen, oxygen, and silicon) that are characterized by high architectural and chemical robustness. This discovery paved the way for carrying out chemistry on frameworks without losing their porosity or crystallinity, and in turn achieving designed properties in materials. The recent union of the covalent and the mechanical bond in the COF provides the opportunity for making woven structures that incorporate flexibility and dynamics into frameworks.

1,687 citations


Journal ArticleDOI
TL;DR: Among patients with platinum-sensitive, recurrent ovarian cancer, the median duration of progression-free survival was significantly longer among those receiving niraparib than among those receiving placebo, regardless of the presence or absence of gBRCA mutations or HRD status, with moderate bone marrow toxicity.
Abstract: Tesaro; Amgen; Genentech; Roche; AstraZeneca; Myriad Genetics; Merck; Gradalis; Cerulean; Vermillion; ImmunoGen; Pfizer; Bayer; Nu-Cana BioMed; INSYS Therapeutics; GlaxoSmithKline; Verastem; Mateon Therapeutics; Pharmaceutical Product Development; Clovis Oncology; Janssen/Johnson Johnson; Eli Lilly; Merck Sharp Dohme

Journal ArticleDOI
TL;DR: The risk that self-reinforcing feedbacks could push the Earth System toward a planetary threshold that, if crossed, could prevent stabilization of the climate at intermediate temperature rises and cause continued warming on a “Hothouse Earth” pathway even as human emissions are reduced is explored.
Abstract: We explore the risk that self-reinforcing feedbacks could push the Earth System toward a planetary threshold that, if crossed, could prevent stabilization of the climate at intermediate temperature rises and cause continued warming on a "Hothouse Earth" pathway even as human emissions are reduced. Crossing the threshold would lead to a much higher global average temperature than any interglacial in the past 1.2 million years and to sea levels significantly higher than at any time in the Holocene. We examine the evidence that such a threshold might exist and where it might be. If the threshold is crossed, the resulting trajectory would likely cause serious disruptions to ecosystems, society, and economies. Collective human action is required to steer the Earth System away from a potential threshold and stabilize it in a habitable interglacial-like state. Such action entails stewardship of the entire Earth System-biosphere, climate, and societies-and could include decarbonization of the global economy, enhancement of biosphere carbon sinks, behavioral changes, technological innovations, new governance arrangements, and transformed social values.

Journal ArticleDOI
TL;DR: In dye-sensitized solar cells co-photosensitized with the alkoxysilyl-anchor dye ADEKA-1 and the carboxy-anchor organic dye LEG4, LEG4 was revealed to work collaboratively by enhancing the electron injection from the light-excited dyes to the TiO2 electrodes.

Proceedings Article
19 Jun 2016
TL;DR: In this article, an autoencoder that leverages learned representations to better measure similarities in data space is presented; it uses learned feature representations in the GAN discriminator as the basis for the VAE reconstruction objective.
Abstract: We present an autoencoder that leverages learned representations to better measure similarities in data space. By combining a variational autoencoder (VAE) with a generative adversarial network (GAN) we can use learned feature representations in the GAN discriminator as basis for the VAE reconstruction objective. Thereby, we replace element-wise errors with feature-wise errors to better capture the data distribution while offering invariance towards e.g. translation. We apply our method to images of faces and show that it outperforms VAEs with element-wise similarity measures in terms of visual fidelity. Moreover, we show that the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.
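A minimal sketch of the feature-wise reconstruction term described above, assuming PyTorch; `disc_features` is a hypothetical callable returning an intermediate discriminator layer's activations, and mean-squared error stands in for the Gaussian feature-space likelihood used in the paper.

```python
import torch.nn.functional as F

def feature_reconstruction_loss(disc_features, x, x_recon):
    """Compare reconstructions to inputs in the discriminator's feature space
    rather than element-wise in pixel space."""
    return F.mse_loss(disc_features(x_recon), disc_features(x))
```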

Journal ArticleDOI
TL;DR: Several aspects of disease response assessment are clarified, along with endpoints for clinical trials, and future directions for disease response assessments are highlighted, to allow uniform reporting within and outside clinical trials.
Abstract: Treatment of multiple myeloma has substantially changed over the past decade with the introduction of several classes of new effective drugs that have greatly improved the rates and depth of response. Response criteria in multiple myeloma were developed to use serum and urine assessment of monoclonal proteins and bone marrow assessment (which is relatively insensitive). Given the high rates of complete response seen in patients with multiple myeloma with new treatment approaches, new response categories need to be defined that can identify responses that are deeper than those conventionally defined as complete response. Recent attempts have focused on the identification of residual tumour cells in the bone marrow using flow cytometry or gene sequencing. Furthermore, sensitive imaging techniques can be used to detect the presence of residual disease outside of the bone marrow. Combining these new methods, the International Myeloma Working Group has defined new response categories of minimal residual disease negativity, with or without imaging-based absence of extramedullary disease, to allow uniform reporting within and outside clinical trials. In this Review, we clarify several aspects of disease response assessment, along with endpoints for clinical trials, and highlight future directions for disease response assessments.

Journal ArticleDOI
TL;DR: Intratumor heterogeneity mediated through chromosome instability was associated with an increased risk of recurrence or death, a finding that supports the potential value of chromosome instability as a prognostic predictor.
Abstract: BackgroundAmong patients with non–small-cell lung cancer (NSCLC), data on intratumor heterogeneity and cancer genome evolution have been limited to small retrospective cohorts. We wanted to prospectively investigate intratumor heterogeneity in relation to clinical outcome and to determine the clonal nature of driver events and evolutionary processes in early-stage NSCLC. MethodsIn this prospective cohort study, we performed multiregion whole-exome sequencing on 100 early-stage NSCLC tumors that had been resected before systemic therapy. We sequenced and analyzed 327 tumor regions to define evolutionary histories, obtain a census of clonal and subclonal events, and assess the relationship between intratumor heterogeneity and recurrence-free survival. ResultsWe observed widespread intratumor heterogeneity for both somatic copy-number alterations and mutations. Driver mutations in EGFR, MET, BRAF, and TP53 were almost always clonal. However, heterogeneous driver alterations that occurred later in evolution w...

Book ChapterDOI
04 Oct 2019
TL;DR: A constructive theory of randomness for functions, based on computational complexity, is developed, and a pseudorandom function generator is presented that has applications in cryptography, random constructions, and complexity theory.
Abstract: A constructive theory of randomness for functions, based on computational complexity, is developed, and a pseudorandom function generator is presented. This generator is a deterministic polynomial-time algorithm that transforms pairs (g, r), where g is any one-way function and r is a random k-bit string, to polynomial-time computable functions f_r: {1, ..., 2^k} → {1, ..., 2^k}. These f_r's cannot be distinguished from random functions by any probabilistic polynomial-time algorithm that asks and receives the value of a function at arguments of its choice. The result has applications in cryptography, random constructions, and complexity theory. Categories and Subject Descriptors: F.0 (Theory of Computation): General; F.1.1 (Computation by Abstract Devices): Models of Computation - computability theory; G.0 (Mathematics of Computing): General; G.3 (Mathematics of Computing): Probability and Statistics - probabilistic algorithms; random number generation
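A runnable sketch of the tree construction this describes: the key seeds the root, and each input bit selects the left or right half of a length-doubling generator's output. SHA-256 is used here only as an illustrative stand-in for the generator the paper builds from a one-way function.

```python
import hashlib

def prg(seed: bytes) -> bytes:
    """Length-doubling pseudorandom generator stand-in (32 bytes -> 64 bytes)."""
    return hashlib.sha256(b"L" + seed).digest() + hashlib.sha256(b"R" + seed).digest()

def ggm_prf(key: bytes, x_bits: str) -> bytes:
    """Evaluate the pseudorandom function at input x (given as a bit string):
    walk down the PRG tree, taking the left or right half at each level."""
    s = key
    for b in x_bits:
        out = prg(s)
        s = out[:32] if b == "0" else out[32:]
    return s

# Example: a 16-bit input under a fixed 32-byte key.
print(ggm_prf(b"\x00" * 32, "1011001110001111").hex())
```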

Proceedings ArticleDOI
01 Oct 2016
TL;DR: A fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps is proposed and a novel way to efficiently learn feature map up-sampling within the network is presented.
Abstract: This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available.
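A small sketch of the reverse Huber (berHu) loss mentioned above, assuming PyTorch: L1 behaviour for small residuals and a quadratic branch for large ones, with the threshold set per batch as a fraction of the largest absolute residual (the fraction used here is an assumption).

```python
import torch

def berhu_loss(pred, target):
    """Reverse Huber loss for depth regression."""
    diff = (pred - target).abs()
    c = 0.2 * diff.max() + 1e-8                 # batch-dependent threshold
    quadratic = (diff ** 2 + c ** 2) / (2 * c)  # branch used above the threshold
    return torch.where(diff <= c, diff, quadratic).mean()
```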


Proceedings ArticleDOI
02 Feb 2016
TL;DR: This article explores a pragmatic approach to multiple object tracking whose main focus is to associate objects efficiently for online and real-time applications; changing the detector alone can improve tracking by up to 18.9%.
Abstract: This paper explores a pragmatic approach to multiple object tracking where the main focus is to associate objects efficiently for online and realtime applications. To this end, detection quality is identified as a key factor influencing tracking performance, where changing the detector can improve tracking by up to 18.9%. Despite only using a rudimentary combination of familiar techniques such as the Kalman Filter and Hungarian algorithm for the tracking components, this approach achieves an accuracy comparable to state-of-the-art online trackers. Furthermore, due to the simplicity of our tracking method, the tracker updates at a rate of 260 Hz which is over 20x faster than other state-of-the-art trackers.
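A minimal sketch of the data-association step described above: build an IoU cost matrix between predicted track boxes and detections and solve it with the Hungarian algorithm via SciPy. The box format, threshold, and names are illustrative; the full tracker also maintains a Kalman filter per track.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def associate(tracks, detections, iou_threshold=0.3):
    """Match predicted track boxes to detections, maximizing total IoU."""
    cost = np.array([[-iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if -cost[r, c] >= iou_threshold]
```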

Journal ArticleDOI
TL;DR: This study shows how to design and train convolutional neural networks to decode task‐related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG‐based brain mapping.
Abstract: Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping. Hum Brain Mapp 38:5391-5420, 2017. © 2017 Wiley Periodicals, Inc.
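A small sketch of the cropped training strategy mentioned above: each EEG trial (channels × time samples) is cut into many overlapping crops that inherit the trial's label, multiplying the number of training examples. The crop length and stride below are illustrative.

```python
import numpy as np

def crop_trial(trial: np.ndarray, crop_len: int, stride: int):
    """Return overlapping crops of shape (channels, crop_len) from one trial."""
    n_samples = trial.shape[1]
    return [trial[:, s:s + crop_len]
            for s in range(0, n_samples - crop_len + 1, stride)]

# Example: a 22-channel, 1000-sample trial cut into 500-sample crops every 50 samples.
crops = crop_trial(np.random.randn(22, 1000), crop_len=500, stride=50)
print(len(crops))  # 11 crops, all sharing the trial's label
```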

Journal ArticleDOI
TL;DR: Comparative genomic analysis indicates vertebrate expansions of genes associated with neuronal function, with tissue-specific developmental regulation, and with the hemostasis and immune systems.
Abstract: A 2.91-billion base pair (bp) consensus sequence of the euchromatic portion of the human genome was generated by the whole-genome shotgun sequencing method. The 14.8-billion bp DNA sequence was generated over 9 months from 27,271,853 high-quality sequence reads (5.11-fold coverage of the genome) from both ends of plasmid clones made from the DNA of five individuals. Two assembly strategies—a whole-genome assembly and a regional chromosome assembly—were used, each combining sequence data from Celera and the publicly funded genome effort. The public data were shredded into 550-bp segments to create a 2.9-fold coverage of those genome regions that had been sequenced, without including biases inherent in the cloning and assembly procedure used by the publicly funded group. This brought the effective coverage in the assemblies to eightfold, reducing the number and size of gaps in the final assembly over what would be obtained with 5.11-fold coverage. The two assembly strategies yielded very similar results that largely agree with independent mapping data. The assemblies effectively cover the euchromatic regions of the human chromosomes. More than 90% of the genome is in scaffold assemblies of 100,000 bp or more, and 25% of the genome is in scaffolds of 10 million bp or larger. Analysis of the genome sequence revealed 26,588 protein-encoding transcripts for which there was strong corroborating evidence and an additional ∼12,000 computationally derived genes with mouse matches or other weak supporting evidence. Although gene-dense clusters are obvious, almost half the genes are dispersed in low G+C sequence separated by large tracts of apparently noncoding sequence. Only 1.1% of the genome is spanned by exons, whereas 24% is in introns, with 75% of the genome being intergenic DNA. Duplications of segmental blocks, ranging in size up to chromosomal lengths, are abundant throughout the genome and reveal a complex evolutionary history. Comparative genomic analysis indicates vertebrate expansions of genes associated with neuronal function, with tissue-specific developmental regulation, and with the hemostasis and immune systems. DNA sequence comparisons between the consensus sequence and publicly funded genome data provided locations of 2.1 million single-nucleotide polymorphisms (SNPs). A random pair of human haploid genomes differed at a rate of 1 bp per 1250 on average, but there was marked heterogeneity in the level of polymorphism across the genome. Less than 1% of all SNPs resulted in variation in proteins, but the task of determining which SNPs have functional consequences remains an open challenge.

Journal ArticleDOI
TL;DR: Dynamic changes observed during microbiome acquisition, as well as steady-state compositions of spatial compartments, support a multistep model for root microbiome assembly from soil wherein the rhizoplane plays a selective gating role.
Abstract: Plants depend upon beneficial interactions between roots and microbes for nutrient availability, growth promotion, and disease suppression. High-throughput sequencing approaches have provided recent insights into root microbiomes, but our current understanding is still limited relative to animal microbiomes. Here we present a detailed characterization of the root-associated microbiomes of the crop plant rice by deep sequencing, using plants grown under controlled conditions as well as field cultivation at multiple sites. The spatial resolution of the study distinguished three root-associated compartments, the endosphere (root interior), rhizoplane (root surface), and rhizosphere (soil close to the root surface), each of which was found to harbor a distinct microbiome. Under controlled greenhouse conditions, microbiome composition varied with soil source and genotype. In field conditions, geographical location and cultivation practice, namely organic vs. conventional, were factors contributing to microbiome variation. Rice cultivation is a major source of global methane emissions, and methanogenic archaea could be detected in all spatial compartments of field-grown rice. The depth and scale of this study were used to build coabundance networks that revealed potential microbial consortia, some of which were involved in methane cycling. Dynamic changes observed during microbiome acquisition, as well as steady-state compositions of spatial compartments, support a multistep model for root microbiome assembly from soil wherein the rhizoplane plays a selective gating role. Similarities in the distribution of phyla in the root microbiomes of rice and other plants suggest that conclusions derived from this study might be generally applicable to land plants.

Journal ArticleDOI
TL;DR: Patisiran improved multiple clinical manifestations of hereditary transthyretin amyloidosis with polyneuropathy and showed an effect on gait speed and modified BMI.
Abstract: BACKGROUND: Patisiran, an investigational RNA interference therapeutic agent, specifically inhibits hepatic synthesis of transthyretin.METHODS: In this phase 3 trial, we randomly assigned patients ...

Journal ArticleDOI
01 Jun 2020
TL;DR: The results suggest that Deep Learning with X-ray imaging may extract significant biomarkers related to the Covid-19 disease, while the best accuracy, sensitivity, and specificity obtained are 96.78%, 98.66%, and 96.46%, respectively.
Abstract: In this study, a dataset of X-ray images from patients with common bacterial pneumonia, confirmed Covid-19 disease, and normal incidents, was utilized for the automatic detection of the Coronavirus disease. The aim of the study is to evaluate the performance of state-of-the-art convolutional neural network architectures proposed over the recent years for medical image classification. Specifically, the procedure called Transfer Learning was adopted. With transfer learning, the detection of various abnormalities in small medical image datasets is an achievable target, often yielding remarkable results. The datasets utilized in this experiment are two. Firstly, a collection of 1427 X-ray images including 224 images with confirmed Covid-19 disease, 700 images with confirmed common bacterial pneumonia, and 504 images of normal conditions. Secondly, a dataset including 224 images with confirmed Covid-19 disease, 714 images with confirmed bacterial and viral pneumonia, and 504 images of normal conditions. The data was collected from the available X-ray images on public medical repositories. The results suggest that Deep Learning with X-ray imaging may extract significant biomarkers related to the Covid-19 disease, while the best accuracy, sensitivity, and specificity obtained is 96.78%, 98.66%, and 96.46% respectively. Since by now, all diagnostic tests show failure rates such as to raise concerns, the probability of incorporating X-rays into the diagnosis of the disease could be assessed by the medical community, based on the findings, while more research to evaluate the X-ray approach from different aspects may be conducted.
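A hedged sketch of the transfer-learning setup described: start from a CNN pretrained on ImageNet, freeze the convolutional backbone, and retrain only a new head for the three classes (Covid-19, bacterial pneumonia, normal). The paper compares several architectures; the ResNet-50 backbone and the recent torchvision weights API used here are assumptions for illustration.

```python
import torch.nn as nn
from torchvision import models

# Pretrained backbone with frozen convolutional weights.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a trainable 3-way classifier head.
model.fc = nn.Linear(model.fc.in_features, 3)
```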

Journal ArticleDOI
TL;DR: Adjuvant therapy with a modified FOLFIRINOX regimen led to significantly longer survival than gemcitabine among patients with resected pancreatic cancer, at the expense of a higher incidence of toxic effects.
Abstract: BACKGROUND: Among patients with metastatic pancreatic cancer, combination chemotherapy with fluorouracil, leucovorin, irinotecan, and oxaliplatin (FOLFIRINOX) leads to longer overall survival than gemcitabine therapy. We compared the efficacy and safety of a modified FOLFIRINOX regimen with gemcitabine as adjuvant therapy in patients with resected pancreatic cancer. METHODS: We randomly assigned 493 patients with resected pancreatic ductal adenocarcinoma to receive a modified FOLFIRINOX regimen (oxaliplatin [85 mg per square meter of body-surface area], irinotecan [180 mg per square meter, reduced to 150 mg per square meter after a protocol-specified safety analysis], leucovorin [400 mg per square meter], and fluorouracil [2400 mg per square meter] every 2 weeks) or gemcitabine (1000 mg per square meter on days 1, 8, and 15 every 4 weeks) for 24 weeks. The primary end point was disease-free survival. Secondary end points included overall survival and safety. RESULTS: At a median follow-up of 33.6 months, the median disease-free survival was 21.6 months in the modified-FOLFIRINOX group and 12.8 months in the gemcitabine group (stratified hazard ratio for cancer-related event, second cancer, or death, 0.58; 95% confidence interval [CI], 0.46 to 0.73; P<0.001). The disease-free survival rate at 3 years was 39.7% in the modified-FOLFIRINOX group and 21.4% in the gemcitabine group. The median overall survival was 54.4 months in the modified-FOLFIRINOX group and 35.0 months in the gemcitabine group (stratified hazard ratio for death, 0.64; 95% CI, 0.48 to 0.86; P=0.003). The overall survival rate at 3 years was 63.4% in the modified-FOLFIRINOX group and 48.6% in the gemcitabine group. Adverse events of grade 3 or 4 occurred in 75.9% of the patients in the modified-FOLFIRINOX group and in 52.9% of those in the gemcitabine group. One patient in the gemcitabine group died from toxic effects (interstitial pneumonitis). CONCLUSIONS: Adjuvant therapy with a modified FOLFIRINOX regimen led to significantly longer survival than gemcitabine among patients with resected pancreatic cancer, at the expense of a higher incidence of toxic effects. (Funded by R&D Unicancer; ClinicalTrials.gov number, NCT01526135; EudraCT number, 2011-002026-52.)

Journal ArticleDOI
TL;DR: The 2019 novel coronavirus (2019-nCoV) outbreak is a major challenge for clinicians, as few data are available that describe the disease pathogenesis and no pharmacological therapies of proven efficacy yet exist, so understanding the evidence for harm or benefit from corticosteroids in 2019-nCoV is of immediate clinical importance.

Journal ArticleDOI
TL;DR: As compared with crizotinib, alectinib showed superior efficacy and lower toxicity in the primary treatment of ALK-positive NSCLC, and results for independent review committee–assessed progression-free survival were consistent with those for the primary end point.
Abstract: BackgroundAlectinib, a highly selective inhibitor of anaplastic lymphoma kinase (ALK), has shown systemic and central nervous system (CNS) efficacy in the treatment of ALK-positive non–small-cell lung cancer (NSCLC). We investigated alectinib as compared with crizotinib in patients with previously untreated, advanced ALK-positive NSCLC, including those with asymptomatic CNS disease. MethodsIn a randomized, open-label, phase 3 trial, we randomly assigned 303 patients with previously untreated, advanced ALK-positive NSCLC to receive either alectinib (600 mg twice daily) or crizotinib (250 mg twice daily). The primary end point was investigator-assessed progression-free survival. Secondary end points were independent review committee–assessed progression-free survival, time to CNS progression, objective response rate, and overall survival. ResultsDuring a median follow-up of 17.6 months (crizotinib) and 18.6 months (alectinib), an event of disease progression or death occurred in 62 of 152 patients (41%) in ...

Journal ArticleDOI
TL;DR: These experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data, and confirm that simple depth two neural networks already have perfect finite sample expressivity.
Abstract: Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small gap between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models. We supplement this republication with a new section at the end summarizing recent progresses in the field since the original version of this paper.
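A tiny sketch of the randomization test at the heart of the experiments described above: keep the images and the model fixed, but replace the true labels with uniformly random ones before training; the reported observation is that large networks still drive training error to zero. The helper below is illustrative only.

```python
import numpy as np

def randomize_labels(labels, num_classes, seed=0):
    """Return uniformly random class labels with the same length as the originals."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, num_classes, size=len(labels))

# Example: corrupt a CIFAR-10-sized label array, then train the unchanged model on it.
original_labels = np.zeros(50000, dtype=int)   # placeholder label array
random_labels = randomize_labels(original_labels, num_classes=10)
```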

Posted Content
TL;DR: The Visual Genome dataset is presented, which contains over 108K images where each image has an average of 35 objects, 26 attributes, and 21 pairwise relationships between objects, and represents the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answer pairs.
Abstract: Despite progress in perceptual tasks such as image classification, computers still perform poorly on cognitive tasks such as image description and question answering. Cognition is core to tasks that involve not just recognizing, but reasoning about our visual world. However, models used to tackle the rich content in images for cognitive tasks are still being trained using the same datasets designed for perceptual tasks. To achieve success at cognitive tasks, models need to understand the interactions and relationships between objects in an image. When asked "What vehicle is the person riding?", computers will need to identify the objects in an image as well as the relationships riding(man, carriage) and pulling(horse, carriage) in order to answer correctly that "the person is riding a horse-drawn carriage". In this paper, we present the Visual Genome dataset to enable the modeling of such relationships. We collect dense annotations of objects, attributes, and relationships within each image to learn these models. Specifically, our dataset contains over 100K images where each image has an average of 21 objects, 18 attributes, and 18 pairwise relationships between objects. We canonicalize the objects, attributes, relationships, and noun phrases in region descriptions and questions answer pairs to WordNet synsets. Together, these annotations represent the densest and largest dataset of image descriptions, objects, attributes, relationships, and question answers.
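A minimal illustration of the relationship annotations described above, represented as subject-predicate-object triples; the example instances mirror the ones in the abstract, and the class name is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Relationship:
    subject: str    # canonicalized to a WordNet synset in the real dataset
    predicate: str
    obj: str        # object of the relation

relationships = [
    Relationship("man", "riding", "carriage"),
    Relationship("horse", "pulling", "carriage"),
]
```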