Proceedings ArticleDOI
21 Jul 2017
TL;DR: RefineNet is presented, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections and introduces chained residual pooling, which captures rich background context in an efficient manner.
Abstract: Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also been the first choice for dense classification problems such as semantic segmentation. However, repeated subsampling operations like pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity mapping mindset, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments and set new state-of-the-art results on seven public datasets. In particular, we achieve an intersection-over-union score of 83.4 on the challenging PASCAL VOC 2012 dataset, which is the best reported result to date.
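
The chained residual pooling component can be pictured with a short PyTorch sketch; this is an illustrative reconstruction based only on the abstract's description (a chain of pooling blocks summed back onto the input through residual connections), so the block count, pooling window, and convolution sizes are assumptions rather than the authors' exact configuration.

```python
# Illustrative chained residual pooling block (assumed structure, not the authors' code):
# a chain of {pool -> conv} blocks whose outputs are successively summed back onto the input.
import torch
import torch.nn as nn

class ChainedResidualPooling(nn.Module):
    def __init__(self, channels: int, n_blocks: int = 2):
        super().__init__()
        self.relu = nn.ReLU()
        self.blocks = nn.ModuleList([
            nn.Sequential(
                nn.MaxPool2d(kernel_size=5, stride=1, padding=2),   # keeps spatial size
                nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            )
            for _ in range(n_blocks)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.relu(x)
        out, path = x, x
        for block in self.blocks:
            path = block(path)   # each block feeds the next, pooling ever larger context
            out = out + path     # residual sum keeps gradients flowing end to end
        return out

feats = torch.randn(1, 256, 32, 32)
print(ChainedResidualPooling(256)(feats).shape)   # torch.Size([1, 256, 32, 32])
```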

2,260 citations


Journal ArticleDOI
John Allison1, K. Amako2, John Apostolakis3, Pedro Arce4, Makoto Asai5, Tsukasa Aso6, Enrico Bagli, Alexander Bagulya7, Sw. Banerjee8, G. Barrand9, B. R. Beck10, Alexey Bogdanov11, D. Brandt, Jeremy M. C. Brown12, Helmut Burkhardt3, Ph Canal8, D. Cano-Ott4, Stephane Chauvie, Kyung-Suk Cho13, G.A.P. Cirrone14, Gene Cooperman15, M. A. Cortés-Giraldo16, G. Cosmo3, Giacomo Cuttone14, G.O. Depaola17, Laurent Desorgher, X. Dong15, Andrea Dotti5, Victor Daniel Elvira8, Gunter Folger3, Ziad Francis18, A. Galoyan19, L. Garnier9, M. Gayer3, K. Genser8, Vladimir Grichine3, Vladimir Grichine7, Susanna Guatelli20, Susanna Guatelli21, Paul Gueye22, P. Gumplinger23, Alexander Howard24, Ivana Hřivnáčová9, S. Hwang13, Sebastien Incerti25, Sebastien Incerti26, A. Ivanchenko3, Vladimir Ivanchenko3, F.W. Jones23, S. Y. Jun8, Pekka Kaitaniemi27, Nicolas A. Karakatsanis28, Nicolas A. Karakatsanis29, M. Karamitrosi30, M.H. Kelsey5, Akinori Kimura31, Tatsumi Koi5, Hisaya Kurashige32, A. Lechner3, S. B. Lee33, Francesco Longo34, M. Maire, Davide Mancusi, A. Mantero, E. Mendoza4, B. Morgan35, K. Murakami2, T. Nikitina3, Luciano Pandola14, P. Paprocki3, J Perl5, Ivan Petrović36, Maria Grazia Pia, W. Pokorski3, J. M. Quesada16, M. Raine, Maria A.M. Reis37, Alberto Ribon3, A. Ristic Fira36, Francesco Romano14, Giorgio Ivan Russo14, Giovanni Santin38, Takashi Sasaki2, D. Sawkey39, J. I. Shin33, Igor Strakovsky40, A. Taborda37, Satoshi Tanaka41, B. Tome, Toshiyuki Toshito, H.N. Tran42, Pete Truscott, L. Urbán, V. V. Uzhinsky19, Jerome Verbeke10, M. Verderi43, B. Wendt44, H. Wenzel8, D. H. Wright5, Douglas Wright10, T. Yamashita, J. Yarba8, H. Yoshida45 
TL;DR: Geant4 as discussed by the authors is a software toolkit for the simulation of the passage of particles through matter, which is used by a large number of experiments and projects in a variety of application domains, including high energy physics, astrophysics and space science, medical physics and radiation protection.
Abstract: Geant4 is a software toolkit for the simulation of the passage of particles through matter. It is used by a large number of experiments and projects in a variety of application domains, including high energy physics, astrophysics and space science, medical physics and radiation protection. Over the past several years, major changes have been made to the toolkit in order to accommodate the needs of these user communities, and to efficiently exploit the growth of computing power made available by advances in technology. The adaptation of Geant4 to multithreading, advances in physics, detector modeling and visualization, extensions to the toolkit, including biasing and reverse Monte Carlo, and tools for physics and release validation are discussed here.

2,260 citations


Journal ArticleDOI
TL;DR: Nivolumab led to a greater proportion of patients achieving an objective response and fewer toxic effects than with alternative available chemotherapy regimens for patients with advanced melanoma that has progressed after ipilimumab or ipilimumab and a BRAF inhibitor.
Abstract: Summary Background Nivolumab, a fully human IgG4 PD-1 immune checkpoint inhibitor antibody, can result in durable responses in patients with melanoma who have progressed after ipilimumab and BRAF inhibitors. We assessed the efficacy and safety of nivolumab compared with investigator's choice of chemotherapy (ICC) as a second-line or later-line treatment in patients with advanced melanoma. Methods In this randomised, controlled, open-label, phase 3 trial, we recruited patients at 90 sites in 14 countries. Eligible patients were 18 years or older, had unresectable or metastatic melanoma, and progressed after ipilimumab, or ipilimumab and a BRAF inhibitor if they were BRAF V 600 mutation-positive. Participating investigators randomly assigned (with an interactive voice response system) patients 2:1 to receive an intravenous infusion of nivolumab 3 mg/kg every 2 weeks or ICC (dacarbazine 1000 mg/m 2 every 3 weeks or paclitaxel 175 mg/m 2 combined with carboplatin area under the curve 6 every 3 weeks) until progression or unacceptable toxic effects. We stratified randomisation by BRAF mutation status, tumour expression of PD-L1, and previous best overall response to ipilimumab. We used permuted blocks (block size of six) within each stratum. Primary endpoints were the proportion of patients who had an objective response and overall survival. Treatment was given open-label, but those doing tumour assessments were masked to treatment assignment. We assessed objective responses per-protocol after 120 patients had been treated with nivolumab and had a minimum follow-up of 24 weeks, and safety in all patients who had had at least one dose of treatment. The trial is closed and this is the first interim analysis, reporting the objective response primary endpoint. This study is registered with ClinicalTrials.gov, number NCT01721746. Findings Between Dec 21, 2012, and Jan 10, 2014, we screened 631 patients, randomly allocating 272 patients to nivolumab and 133 to ICC. Confirmed objective responses were reported in 38 (31·7%, 95% CI 23·5–40·8) of the first 120 patients in the nivolumab group versus five (10·6%, 3·5–23·1) of 47 patients in the ICC group. Grade 3–4 adverse events related to nivolumab included increased lipase (three [1%] of 268 patients), increased alanine aminotransferase, anaemia, and fatigue (two [1%] each); for ICC, these included neutropenia (14 [14%] of 102), thrombocytopenia (six [6%]), and anaemia (five [5%]). We noted grade 3–4 drug-related serious adverse events in 12 (5%) nivolumab-treated patients and nine (9%) patients in the ICC group. No treatment-related deaths occurred. Interpretation Nivolumab led to a greater proportion of patients achieving an objective response and fewer toxic effects than with alternative available chemotherapy regimens for patients with advanced melanoma that has progressed after ipilimumab or ipilimumab and a BRAF inhibitor. Nivolumab represents a new treatment option with clinically meaningful durable objective responses in a population of high unmet need. Funding Bristol-Myers Squibb.
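
As a quick arithmetic check, the interval quoted in the abstract (31.7%, 95% CI 23.5–40.8, for 38 of the first 120 nivolumab-treated patients) is consistent with an exact binomial (Clopper–Pearson) interval; the snippet below is a sanity check under that assumption, not a reproduction of the trial's statistical analysis.

```python
# Sanity check of the reported response rate, assuming an exact (Clopper-Pearson) 95% CI.
from scipy.stats import beta

k, n = 38, 120                                   # confirmed responses / evaluated patients
lower = beta.ppf(0.025, k, n - k + 1)
upper = beta.ppf(0.975, k + 1, n - k)
print(f"response rate {k / n:.1%}, 95% CI {lower:.1%}-{upper:.1%}")
# Expected to land close to the abstract's 31.7% (23.5-40.8).
```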

2,260 citations


Journal ArticleDOI
TL;DR: Monocle 2, an algorithm that uses reversed graph embedding to describe multiple fate decisions in a fully unsupervised manner, is applied to two studies of blood development, finding that mutations in the genes encoding key lineage transcription factors divert cells to alternative fates.
Abstract: Single-cell trajectories can unveil how gene regulation governs cell fate decisions. However, learning the structure of complex trajectories with multiple branches remains a challenging computational problem. We present Monocle 2, an algorithm that uses reversed graph embedding to describe multiple fate decisions in a fully unsupervised manner. We applied Monocle 2 to two studies of blood development and found that mutations in the genes encoding key lineage transcription factors divert cells to alternative fates.
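
Monocle 2 itself is an R package built on reversed graph embedding; the sketch below is only a rough Python stand-in for the general idea of a branching trajectory: embed cells, fit a tree over cluster centroids, and read pseudotime as graph distance from a chosen root. All data and parameter choices here are placeholders.

```python
# Rough stand-in for a branching-trajectory analysis (not Monocle 2 itself).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from scipy.sparse.csgraph import minimum_spanning_tree, dijkstra
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
cells = rng.normal(size=(500, 50))                       # placeholder expression matrix
coords = PCA(n_components=2).fit_transform(cells)        # low-dimensional embedding
centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(coords).cluster_centers_

tree = minimum_spanning_tree(cdist(centers, centers))    # tree over centroids ~ trajectory backbone
dist_from_root = dijkstra(tree, directed=False, indices=0)

nearest = cdist(coords, centers).argmin(axis=1)          # assign each cell to a centroid
pseudotime = dist_from_root[nearest]                     # pseudotime = distance from the root
print(pseudotime[:5])
```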

2,257 citations


Journal ArticleDOI
TL;DR: The first Gaia data release, Gaia DR1 as mentioned in this paper, consists of the positions, parallaxes, and mean proper motions for about 2 million of the brightest stars in common with the Hipparcos and Tycho-2 catalogues.
Abstract: At about 1000 days after the launch of Gaia we present the first Gaia data release, Gaia DR1, consisting of astrometry and photometry for over 1 billion sources brighter than magnitude 20.7. We summarize Gaia DR1 and provide illustrations of the scientific quality of the data, followed by a discussion of the limitations due to the preliminary nature of this release. Gaia DR1 consists of: a primary astrometric data set which contains the positions, parallaxes, and mean proper motions for about 2 million of the brightest stars in common with the Hipparcos and Tycho-2 catalogues and a secondary astrometric data set containing the positions for an additional 1.1 billion sources. The second component is the photometric data set, consisting of mean G-band magnitudes for all sources. The G-band light curves and the characteristics of ~3000 Cepheid and RR Lyrae stars, observed at high cadence around the south ecliptic pole, form the third component. For the primary astrometric data set the typical uncertainty is about 0.3 mas for the positions and parallaxes, and about 1 mas/yr for the proper motions. A systematic component of ~0.3 mas should be added to the parallax uncertainties. For the subset of ~94000 Hipparcos stars in the primary data set, the proper motions are much more precise at about 0.06 mas/yr. For the secondary astrometric data set, the typical uncertainty of the positions is ~10 mas. The median uncertainties on the mean G-band magnitudes range from the mmag level to ~0.03 mag over the magnitude range 5 to 20.7. Gaia DR1 represents a major advance in the mapping of the heavens and the availability of basic stellar data that underpin observational astrophysics. Nevertheless, the very preliminary nature of this first Gaia data release does lead to a number of important limitations to the data quality which should be carefully considered before drawing conclusions from the data.
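
To make the quoted uncertainties concrete, the snippet below shows how one might fold the ~0.3 mas systematic floor into a DR1 parallax before converting to a distance; the measured parallax value is invented for illustration, and the simple quadrature/first-order propagation is an assumption, not part of the Gaia pipeline.

```python
# Folding the quoted ~0.3 mas systematic into a DR1 parallax (input value is invented).
import math

parallax_mas = 5.0            # hypothetical measured parallax
sigma_stat_mas = 0.3          # typical DR1 statistical uncertainty (abstract)
sigma_sys_mas = 0.3           # systematic component to be added (abstract)

sigma_tot = math.hypot(sigma_stat_mas, sigma_sys_mas)       # quadrature sum (assumption)
distance_pc = 1000.0 / parallax_mas                         # 1/parallax, mas -> parsec
sigma_distance_pc = distance_pc * sigma_tot / parallax_mas  # first-order error propagation
print(f"{distance_pc:.0f} +/- {sigma_distance_pc:.0f} pc")
```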

2,256 citations


Posted Content
TL;DR: This paper presents UNet++, a new, more powerful architecture for medical image segmentation where the encoder and decoder sub-networks are connected through a series of nested, dense skip pathways, and argues that the optimizer would deal with an easier learning task when the feature maps from the decoder and encoder networks are semantically similar.
Abstract: In this paper, we present UNet++, a new, more powerful architecture for medical image segmentation. Our architecture is essentially a deeply-supervised encoder-decoder network where the encoder and decoder sub-networks are connected through a series of nested, dense skip pathways. The re-designed skip pathways aim at reducing the semantic gap between the feature maps of the encoder and decoder sub-networks. We argue that the optimizer would deal with an easier learning task when the feature maps from the decoder and encoder networks are semantically similar. We have evaluated UNet++ in comparison with U-Net and wide U-Net architectures across multiple medical image segmentation tasks: nodule segmentation in the low-dose CT scans of chest, nuclei segmentation in the microscopy images, liver segmentation in abdominal CT scans, and polyp segmentation in colonoscopy videos. Our experiments demonstrate that UNet++ with deep supervision achieves an average IoU gain of 3.9 and 3.4 points over U-Net and wide U-Net, respectively.
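
The nested, dense skip pathways can be illustrated with a toy PyTorch shape check (not the authors' implementation): each intermediate node convolves the concatenation of all earlier nodes at its resolution with an upsampled feature from the level below. Channel counts and block definitions are illustrative assumptions.

```python
# Toy shape check of nested, dense skip nodes (illustrative channels, not the authors' code).
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Module:
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())

up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

x00 = torch.randn(1, 32, 64, 64)    # encoder feature, level 0
x10 = torch.randn(1, 64, 32, 32)    # encoder feature, level 1
x20 = torch.randn(1, 128, 16, 16)   # encoder feature, level 2

x11 = conv_block(64 + 128, 64)(torch.cat([x10, up(x20)], dim=1))
x01 = conv_block(32 + 64, 32)(torch.cat([x00, up(x10)], dim=1))
# Dense skip: the next node at level 0 sees x00, x01 and the upsampled node from level 1.
x02 = conv_block(32 + 32 + 64, 32)(torch.cat([x00, x01, up(x11)], dim=1))
print(x02.shape)   # torch.Size([1, 32, 64, 64])
```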

2,254 citations


Journal ArticleDOI
TL;DR: Clumpak, available at http://clumpak.tau.ac.il, simplifies the use of model-based analyses of population structure in population genetics and molecular ecology by automating the postprocessing of results of model‐based population structure analyses.
Abstract: The identification of the genetic structure of populations from multilocus genotype data has become a central component of modern population-genetic data analysis. Application of model-based clustering programs often entails a number of steps, in which the user considers different modelling assumptions, compares results across different predetermined values of the number of assumed clusters (a parameter typically denoted K), examines multiple independent runs for each fixed value of K, and distinguishes among runs belonging to substantially distinct clustering solutions. Here, we present CLUMPAK (Cluster Markov Packager Across K), a method that automates the postprocessing of results of model-based population structure analyses. For analysing multiple independent runs at a single K value, CLUMPAK identifies sets of highly similar runs, separating distinct groups of runs that represent distinct modes in the space of possible solutions. This procedure, which generates a consensus solution for each distinct mode, is performed by the use of a Markov clustering algorithm that relies on a similarity matrix between replicate runs, as computed by the software CLUMPP. Next, CLUMPAK identifies an optimal alignment of inferred clusters across different values of K, extending a similar approach implemented for a fixed K in CLUMPP and simplifying the comparison of clustering results across different K values. CLUMPAK incorporates additional features, such as implementations of methods for choosing K and comparing solutions obtained by different programs, models, or data subsets. CLUMPAK, available at http://clumpak.tau.ac.il, simplifies the use of model-based analyses of population structure in population genetics and molecular ecology.
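
A minimal sketch of the label-switching problem CLUMPAK postprocesses, assuming nothing beyond the abstract: replicate membership matrices are comparable only after matching their arbitrarily permuted cluster columns, and a similarity matrix over runs is what gets clustered into modes. The matching rule and data below are toy stand-ins, not CLUMPAK's actual pipeline.

```python
# Toy illustration of comparing replicate runs whose cluster labels are permuted arbitrarily.
import numpy as np
from scipy.optimize import linear_sum_assignment

def aligned_similarity(q1: np.ndarray, q2: np.ndarray) -> float:
    """Similarity of two N x K membership matrices after best column matching."""
    _, cols = linear_sum_assignment(-q1.T @ q2)   # Hungarian matching of cluster columns
    return 1.0 - np.abs(q1 - q2[:, cols]).mean()

rng = np.random.default_rng(1)
runs = [rng.dirichlet(np.ones(3), size=100) for _ in range(5)]   # stand-in replicate runs
sim = np.array([[aligned_similarity(a, b) for b in runs] for a in runs])
print(sim.round(2))
# CLUMPAK feeds a similarity matrix of this kind (computed by CLUMPP) to a Markov
# clustering algorithm so that highly similar runs end up grouped as one mode.
```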

2,252 citations


Journal ArticleDOI
Haldun Akoglu1
TL;DR: This article familiarizes medical readers with several different correlation coefficients reported in medical manuscripts, clarifies confounding aspects, and summarizes naming practices for the strength of correlation coefficients.
Abstract: When writing a manuscript, we often use words such as perfect, strong, good or weak to name the strength of the relationship between variables. However, it is unclear where a good relationship turns into a strong one. The same strength of r is named differently by several researchers. Therefore, there is an absolute necessity to explicitly report the strength and direction of r while reporting correlation coefficients in manuscripts. This article aims to familiarize medical readers with several different correlation coefficients reported in medical manuscripts, clarify confounding aspects and summarize the naming practices for the strength of correlation coefficients.
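
A small example of the practice the article discusses: compute a coefficient and attach an explicit verbal label. The cut-offs used below are one common convention chosen for illustration; the article's point is precisely that such naming schemes differ between authors.

```python
# Computing two coefficients and attaching a verbal label (cut-offs are one common
# convention used here for illustration only).
import numpy as np
from scipy.stats import pearsonr, spearmanr

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 3.2, 4.8, 5.1, 6.3])

def strength(r: float) -> str:
    a = abs(r)
    if a >= 0.9:
        return "very strong"
    if a >= 0.7:
        return "strong"
    if a >= 0.4:
        return "moderate"
    if a >= 0.1:
        return "weak"
    return "negligible"

for name, (r, p) in [("Pearson", pearsonr(x, y)), ("Spearman", spearmanr(x, y))]:
    print(f"{name}: r = {r:+.2f} ({strength(r)}), p = {p:.3f}")
```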

2,251 citations


Journal ArticleDOI
TL;DR: The benefit of nivolumab plus ipilimumab over chemotherapy was broadly consistent within subgroups, including patients with a PD‐L1 expression level of at least 1% and those with a level of less than 1%.
Abstract: Background Nivolumab plus ipilimumab showed promising efficacy for the treatment of non–small-cell lung cancer (NSCLC) in a phase 1 trial, and tumor mutational burden has emerged as a potential biomarker of benefit. In this part of an open-label, multipart, phase 3 trial, we examined progression-free survival with nivolumab plus ipilimumab versus chemotherapy among patients with a high tumor mutational burden (≥10 mutations per megabase). Methods We enrolled patients with stage IV or recurrent NSCLC that was not previously treated with chemotherapy. Those with a level of tumor programmed death ligand 1 (PD-L1) expression of at least 1% were randomly assigned, in a 1:1:1 ratio, to receive nivolumab plus ipilimumab, nivolumab monotherapy, or chemotherapy; those with a tumor PD-L1 expression level of less than 1% were randomly assigned, in a 1:1:1 ratio, to receive nivolumab plus ipilimumab, nivolumab plus chemotherapy, or chemotherapy. Tumor mutational burden was determined by the FoundationOne CDx...

2,249 citations


Posted Content
TL;DR: A combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions is proposed, demonstrating the broad applicability of this approach to VQA.
Abstract: Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr / SPICE / BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge.
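
The top-down weighting step can be sketched in a few lines: given region feature vectors from a bottom-up detector, a task-dependent query scores each region and a softmax produces the attention weights. The dot-product scoring, dimensions, and data below are illustrative assumptions, not the paper's exact attention network.

```python
# Minimal top-down weighting over bottom-up region features (dimensions are illustrative).
import numpy as np

rng = np.random.default_rng(0)
K, D = 36, 2048
regions = rng.normal(size=(K, D))       # bottom-up: one feature vector per detected region
query = rng.normal(size=(D,))           # top-down signal (e.g. question or caption state)

scores = regions @ query / np.sqrt(D)   # simple dot-product scoring (assumption)
weights = np.exp(scores - scores.max())
weights /= weights.sum()                # softmax over regions
attended = weights @ regions            # attended feature = weighted sum of region features
print(weights.shape, attended.shape)    # (36,) (2048,)
```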

2,248 citations


Proceedings ArticleDOI
15 Jun 2019
TL;DR: DeepSDF as mentioned in this paper represents a shape's surface by a continuous volumetric field: the magnitude of a point in the field represents the distance to the surface boundary and the sign indicates whether the region is inside (-) or outside (+) of the shape.
Abstract: Computer graphics, 3D computer vision and robotics communities have produced multiple approaches to representing 3D geometry for rendering and reconstruction. These provide trade-offs across fidelity, efficiency and compression capabilities. In this work, we introduce DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data. DeepSDF, like its classical counterpart, represents a shape's surface by a continuous volumetric field: the magnitude of a point in the field represents the distance to the surface boundary and the sign indicates whether the region is inside (-) or outside (+) of the shape, hence our representation implicitly encodes a shape's boundary as the zero-level-set of the learned function while explicitly representing the classification of space as being part of the shapes interior or not. While classical SDF's both in analytical or discretized voxel form typically represent the surface of a single shape, DeepSDF can represent an entire class of shapes. Furthermore, we show state-of-the-art performance for learned 3D shape representation and completion while reducing the model size by an order of magnitude compared with previous work.
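
The sign convention DeepSDF learns is the classical one, shown here with an analytic sphere SDF: negative inside, positive outside, zero on the surface (the zero-level-set that implicitly encodes the boundary).

```python
# Classical signed distance convention: negative inside, positive outside, zero on the surface.
import numpy as np

def sphere_sdf(points: np.ndarray, center=(0.0, 0.0, 0.0), radius: float = 1.0) -> np.ndarray:
    return np.linalg.norm(points - np.asarray(center), axis=-1) - radius

pts = np.array([[0.0, 0.0, 0.0],    # inside      -> negative
                [1.0, 0.0, 0.0],    # on surface  -> ~0 (the zero-level-set)
                [2.0, 0.0, 0.0]])   # outside     -> positive
print(sphere_sdf(pts))              # [-1.  0.  1.]
```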

Posted Content
TL;DR: The authors proposed a self-supervised loss that focuses on modeling inter-sentence coherence, and showed it consistently helps downstream tasks with multi-sentence inputs, achieving state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks.
Abstract: Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations and longer training times. To address these problems, we present two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT. Comprehensive empirical evidence shows that our proposed methods lead to models that scale much better compared to the original BERT. We also use a self-supervised loss that focuses on modeling inter-sentence coherence, and show it consistently helps downstream tasks with multi-sentence inputs. As a result, our best model establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large. The code and the pretrained models are available at this https URL.
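
One of the two parameter-reduction techniques, factorized embedding parameterization, is easy to see as a parameter count: instead of a V × H embedding table, ALBERT uses a V × E table followed by an E × H projection. The V, H, E values below are typical BERT-like numbers used only for illustration.

```python
# Effect of factorized embedding parameterization on parameter count (illustrative numbers).
V, H, E = 30000, 768, 128           # vocabulary, hidden size, small embedding size

dense_embedding = V * H             # BERT-style: one V x H table
factorized = V * E + E * H          # ALBERT-style: V x E table plus E x H projection
print(f"{dense_embedding:,} vs {factorized:,} embedding parameters")
# 23,040,000 vs 3,938,304
```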

Journal ArticleDOI
TL;DR: Tests on both synthetic and real reads show Unicycler can assemble larger contigs with fewer misassemblies than other hybrid assemblers, even when long-read depth and accuracy are low.
Abstract: The Illumina DNA sequencing platform generates accurate but short reads, which can be used to produce accurate but fragmented genome assemblies. Pacific Biosciences and Oxford Nanopore Technologies DNA sequencing platforms generate long reads that can produce complete genome assemblies, but the sequencing is more expensive and error-prone. There is significant interest in combining data from these complementary sequencing technologies to generate more accurate "hybrid" assemblies. However, few tools exist that truly leverage the benefits of both types of data, namely the accuracy of short reads and the structural resolving power of long reads. Here we present Unicycler, a new tool for assembling bacterial genomes from a combination of short and long reads, which produces assemblies that are accurate, complete and cost-effective. Unicycler builds an initial assembly graph from short reads using the de novo assembler SPAdes and then simplifies the graph using information from short and long reads. Unicycler uses a novel semi-global aligner to align long reads to the assembly graph. Tests on both synthetic and real reads show Unicycler can assemble larger contigs with fewer misassemblies than other hybrid assemblers, even when long-read depth and accuracy are low. Unicycler is open source (GPLv3) and available at github.com/rrwick/Unicycler.

Journal ArticleDOI
TL;DR: The reported incidence of sepsis is increasing, likely reflecting aging populations with more comorbidities, greater recognition, and, in some countries, reimbursement-favorable coding.
Abstract: Sepsis, a syndrome of physiologic, pathologic, and biochemical abnormalities induced by infection, is a major public health concern, accounting for more than $20 billion (5.2%) of total US hospital costs in 2011. The reported incidence of sepsis is increasing, likely reflecting aging populations with more comorbidities, greater recognition, and, in some countries, reimbursement-favorable coding. Although the true incidence is unknown, conservative estimates indicate that sepsis is a leading cause of mortality and critical illness worldwide.

Proceedings ArticleDOI
02 Apr 2019
TL;DR: For the first time, a much simpler and flexible detection framework achieving improved detection accuracy is demonstrated, and it is hoped that the proposed FCOS framework can serve as a simple and strong alternative for many other instance-level tasks.
Abstract: We propose a fully convolutional one-stage object detector (FCOS) to solve object detection in a per-pixel prediction fashion, analogue to semantic segmentation. Almost all state-of-the-art object detectors such as RetinaNet, SSD, YOLOv3, and Faster R-CNN rely on pre-defined anchor boxes. In contrast, our proposed detector FCOS is anchor box free, as well as proposal free. By eliminating the pre-defined set of anchor boxes, FCOS completely avoids the complicated computation related to anchor boxes such as calculating overlapping during training. More importantly, we also avoid all hyper-parameters related to anchor boxes, which are often very sensitive to the final detection performance. With the only post-processing non-maximum suppression (NMS), FCOS with ResNeXt-64x4d-101 achieves 44.7% in AP with single-model and single-scale testing, surpassing previous one-stage detectors with the advantage of being much simpler. For the first time, we demonstrate a much simpler and flexible detection framework achieving improved detection accuracy. We hope that the proposed FCOS framework can serve as a simple and strong alternative for many other instance-level tasks. Code is available at: https://tinyurl.com/FCOSv1
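
The per-pixel prediction target is easy to state concretely: a location inside a ground-truth box regresses its distances to the four box sides, and a centerness score down-weights locations far from the box centre. The small function below follows the commonly cited definitions; treat it as a sketch rather than the paper's exact formulation.

```python
# Per-location regression targets and centerness for an anchor-free detector (sketch).
import math

def location_targets(x, y, box):
    """box = (x0, y0, x1, y1); (x, y) is assumed to fall inside the box."""
    x0, y0, x1, y1 = box
    l, t, r, b = x - x0, y - y0, x1 - x, y1 - y          # distances to the four sides
    centerness = math.sqrt((min(l, r) / max(l, r)) * (min(t, b) / max(t, b)))
    return (l, t, r, b), centerness

print(location_targets(60, 40, (20, 10, 120, 90)))
# ((40, 30, 60, 50), ~0.63): far-from-centre locations get low centerness scores.
```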

Journal ArticleDOI
TL;DR: In patients with severe aortic stenosis who were at low surgical risk, TAVR with a self‐expanding supraannular bioprosthesis was noninferior to surgery with respect to the composite end point of death or disabling stroke at 24 months.
Abstract: Background Transcatheter aortic-valve replacement (TAVR) is an alternative to surgery in patients with severe aortic stenosis who are at increased risk for death from surgery; less is know...

Proceedings ArticleDOI
21 Jul 2017
TL;DR: In this article, the authors propose a novel training objective that enables CNNs to learn to perform single image depth estimation, despite the absence of ground truth depth data, by generating disparity images by training their network with an image reconstruction loss.
Abstract: Learning based methods have shown very promising results for the task of depth estimation in single images. However, most existing approaches treat depth prediction as a supervised regression problem and as a result, require vast quantities of corresponding ground truth depth data for training. Just recording quality depth data in a range of environments is a challenging problem. In this paper, we innovate beyond existing approaches, replacing the use of explicit depth data during training with easier-to-obtain binocular stereo footage. We propose a novel training objective that enables our convolutional neural network to learn to perform single image depth estimation, despite the absence of ground truth depth data. Exploiting epipolar geometry constraints, we generate disparity images by training our network with an image reconstruction loss. We show that solving for image reconstruction alone results in poor quality depth images. To overcome this problem, we propose a novel training loss that enforces consistency between the disparities produced relative to both the left and right images, leading to improved performance and robustness compared to existing approaches. Our method produces state of the art results for monocular depth estimation on the KITTI driving dataset, even outperforming supervised methods that have been trained with ground truth depth.
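
At test time the predicted disparities convert to metric depth through the standard stereo relation depth = focal × baseline / disparity; the snippet below applies that relation with KITTI-like focal length and baseline values, which are illustrative rather than the exact calibration used in the paper.

```python
# Converting predicted disparity (pixels) to depth (metres) with KITTI-like constants.
import numpy as np

def disparity_to_depth(disparity_px: np.ndarray,
                       focal_px: float = 720.0,                   # illustrative focal length
                       baseline_m: float = 0.54) -> np.ndarray:   # illustrative stereo baseline
    return focal_px * baseline_m / np.clip(disparity_px, 1e-3, None)

disp = np.array([5.0, 20.0, 80.0])
print(disparity_to_depth(disp))   # larger disparity -> closer object
```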

Journal ArticleDOI
TL;DR: The third generation of the Sloan Digital Sky Survey (SDSS-III) took data from 2008 to 2014 using the original SDSS wide-field imager, the original and an upgraded multi-object fiber-fed optical spectrograph, a new near-infrared high-resolution spectrograph, and a novel optical interferometer as discussed by the authors.
Abstract: The third generation of the Sloan Digital Sky Survey (SDSS-III) took data from 2008 to 2014 using the original SDSS wide-field imager, the original and an upgraded multi-object fiber-fed optical spectrograph, a new near-infrared high-resolution spectrograph, and a novel optical interferometer. All the data from SDSS-III are now made public. In particular, this paper describes Data Release 11 (DR11) including all data acquired through 2013 July, and Data Release 12 (DR12) adding data acquired through 2014 July (including all data included in previous data releases), marking the end of SDSS-III observing. Relative to our previous public release (DR10), DR12 adds one million new spectra of galaxies and quasars from the Baryon Oscillation Spectroscopic Survey (BOSS) over an additional 3000 sq. deg of sky, more than triples the number of H-band spectra of stars as part of the Apache Point Observatory (APO) Galactic Evolution Experiment (APOGEE), and includes repeated accurate radial velocity measurements of 5500 stars from the Multi-Object APO Radial Velocity Exoplanet Large-area Survey (MARVELS). The APOGEE outputs now include measured abundances of 15 different elements for each star. In total, SDSS-III added 2350 sq. deg of ugriz imaging; 155,520 spectra of 138,099 stars as part of the Sloan Exploration of Galactic Understanding and Evolution 2 (SEGUE-2) survey; 2,497,484 BOSS spectra of 1,372,737 galaxies, 294,512 quasars, and 247,216 stars over 9376 sq. deg; 618,080 APOGEE spectra of 156,593 stars; and 197,040 MARVELS spectra of 5,513 stars. Since its first light in 1998, SDSS has imaged over 1/3 of the Celestial sphere in five bands and obtained over five million astronomical spectra.

Journal ArticleDOI
TL;DR: Interaction of VeA with at least four methyltransferase proteins indicates a molecular hub function for VeA that questions: Is there a VeA supercomplex or is VeA part of a highly dynamic cellular control network with many different partners?
Abstract: Fungal secondary metabolism has become an important research topic with great biomedical and biotechnological value. In the postgenomic era, understanding the diversity and the molecular control of secondary metabolites are two challenging tasks addressed by the research community. Discovery of the LaeA methyltransferase 10 years ago opened up a new horizon on the control of secondary metabolite research when it was found that expression of many secondary metabolite gene clusters is controlled by LaeA. While the molecular function of LaeA remains an enigma, discovery of the velvet family proteins as interaction partners further extended the role of the LaeA beyond secondary metabolism. The heterotrimeric VelB-VeA-LaeA complex plays important roles in development, sporulation, secondary metabolism and pathogenicity. Recently, three other methyltransferases have been found to associate with the velvet complex, the LaeA-like methyltransferase F (LlmF) and the methyltransferase heterodimers VipC-VapB. Interaction of VeA with at least four methyltransferase proteins indicates a molecular hub function for VeA that questions: Is there a VeA supercomplex or is VeA part of a highly dynamic cellular control network with many different partners?

Proceedings ArticleDOI
21 Jul 2017
TL;DR: The ADE20K dataset, spanning diverse annotations of scenes, objects, parts of objects, and in some cases even parts of parts, is introduced and it is shown that the trained scene parsing networks can lead to applications such as image content removal and scene synthesis.
Abstract: Scene parsing, or recognizing and segmenting objects and stuff in an image, is one of the key problems in computer vision. Despite the community's efforts in data collection, there are still few image datasets covering a wide range of scenes and object categories with dense and detailed annotations for scene parsing. In this paper, we introduce and analyze the ADE20K dataset, spanning diverse annotations of scenes, objects, parts of objects, and in some cases even parts of parts. A scene parsing benchmark is built upon the ADE20K with 150 object and stuff classes included. Several segmentation baseline models are evaluated on the benchmark. A novel network design called Cascade Segmentation Module is proposed to parse a scene into stuff, objects, and object parts in a cascade and improve over the baselines. We further show that the trained scene parsing networks can lead to applications such as image content removal and scene synthesis.

Proceedings ArticleDOI
01 Jun 2018
TL;DR: PWC-Net as discussed by the authors uses the current optical flow estimate to warp the CNN features of the second image, builds a cost volume from the warped and first-image features that is processed by a CNN to estimate the optical flow, and achieves state-of-the-art performance on the MPI Sintel final pass and KITTI 2015 benchmarks.
Abstract: We present a compact but effective CNN model for optical flow, called PWC-Net. PWC-Net has been designed according to simple and well-established principles: pyramidal processing, warping, and the use of a cost volume. Cast in a learnable feature pyramid, PWC-Net uses the current optical flow estimate to warp the CNN features of the second image. It then uses the warped features and features of the first image to construct a cost volume, which is processed by a CNN to estimate the optical flow. PWC-Net is 17 times smaller in size and easier to train than the recent FlowNet2 model. Moreover, it outperforms all published optical flow methods on the MPI Sintel final pass and KITTI 2015 benchmarks, running at about 35 fps on Sintel resolution (1024 × 436) images. Our models are available on our project website.
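
The cost-volume idea can be sketched independently of the network: correlate features of the first image with (warped) features of the second over a small displacement range. The toy implementation below uses random features and a tiny search radius purely for illustration; it is not the PWC-Net correlation layer.

```python
# Toy cost volume: correlate first-image features with shifted second-image features.
import numpy as np

def cost_volume(f1: np.ndarray, f2: np.ndarray, max_disp: int = 2) -> np.ndarray:
    """f1, f2: (C, H, W) feature maps; returns ((2*max_disp + 1)**2, H, W)."""
    C, H, W = f1.shape
    f2_pad = np.pad(f2, ((0, 0), (max_disp, max_disp), (max_disp, max_disp)))
    costs = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = f2_pad[:, dy:dy + H, dx:dx + W]
            costs.append((f1 * shifted).mean(axis=0))   # per-pixel feature correlation
    return np.stack(costs)

f1, f2 = np.random.rand(16, 8, 8), np.random.rand(16, 8, 8)
print(cost_volume(f1, f2).shape)   # (25, 8, 8)
```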

Journal ArticleDOI
TL;DR: In this paper, the authors used the Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST) to reduce the uncertainty in the local value of the Hubble constant from 3.3% to 2.4%.
Abstract: We use the Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST) to reduce the uncertainty in the local value of the Hubble constant from 3.3% to 2.4%. The bulk of this improvement comes from new near-infrared (NIR) observations of Cepheid variables in 11 host galaxies of recent type Ia supernovae (SNe Ia), more than doubling the sample of reliable SNe Ia having a Cepheid-calibrated distance to a total of 19; these in turn leverage the magnitude-redshift relation based on ∼300 SNe Ia at z < 0.15. All 19 hosts as well as the megamaser system NGC 4258 have been observed with WFC3 in the optical and NIR, thus nullifying cross-instrument zeropoint errors in the relative distance estimates from Cepheids. Other noteworthy improvements include a 33% reduction in the systematic uncertainty in the maser distance to NGC 4258, a larger sample of Cepheids in the Large Magellanic Cloud (LMC), a more robust distance to the LMC based on late-type detached eclipsing binaries (DEBs), HST observations of Cepheids in M31, and new HST-based trigonometric parallaxes for Milky Way (MW) Cepheids. We consider four geometric distance calibrations of Cepheids: (i) megamasers in NGC 4258, (ii) 8 DEBs in the LMC, (iii) 15 MW Cepheids with parallaxes measured with HST/FGS, HST/WFC3 spatial scanning and/or Hipparcos, and (iv) 2 DEBs in M31. The Hubble constant from each is 72.25 ± 2.51, 72.04 ± 2.67, 76.18 ± 2.37, and 74.50 ± 3.27 km s^-1 Mpc^-1, respectively. Our best estimate of H_0 = 73.24 ± 1.74 km s^-1 Mpc^-1 combines the anchors NGC 4258, MW, and LMC, yielding a 2.4% determination (all quoted uncertainties include fully propagated statistical and systematic components). This value is 3.4σ higher than 66.93 ± 0.62 km s^-1 Mpc^-1 predicted by ΛCDM with 3 neutrino flavors having a mass of 0.06 eV and the new Planck data, but the discrepancy reduces to 2.1σ relative to the prediction of 69.3 ± 0.7 km s^-1 Mpc^-1 based on the comparably precise combination of WMAP+ACT+SPT+BAO observations, suggesting that systematic uncertainties in CMB radiation measurements may play a role in the tension. If we take the conflict between Planck high-redshift measurements and our local determination of H_0 at face value, one plausible explanation could involve an additional source of dark radiation in the early universe in the range of ΔN_eff ≈ 0.4–1. We anticipate further significant improvements in H_0 from upcoming parallax measurements of long-period MW Cepheids.
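
The quoted tensions follow from combining the local and CMB-inferred uncertainties in quadrature; the quick check below reproduces the ~3.4σ and ~2.1σ figures from the numbers in the abstract (a back-of-the-envelope consistency check, not the paper's full covariance treatment).

```python
# Back-of-the-envelope check of the quoted tensions (quadrature combination of errors).
import math

def tension_sigma(local, local_err, other, other_err):
    return abs(local - other) / math.hypot(local_err, other_err)

print(round(tension_sigma(73.24, 1.74, 66.93, 0.62), 1))   # ~3.4 (vs Planck + LCDM)
print(round(tension_sigma(73.24, 1.74, 69.3, 0.7), 1))     # ~2.1 (vs WMAP+ACT+SPT+BAO)
```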

Journal ArticleDOI
16 Feb 2017-PLOS ONE
TL;DR: Improvements in the relative accuracy considering the amount of variation explained, in comparison to the previous version of SoilGrids at 1 km spatial resolution, range from 60 to 230%.
Abstract: This paper describes the technical development and accuracy assessment of the most recent and improved version of the SoilGrids system at 250m resolution (June 2016 update). SoilGrids provides global predictions for standard numeric soil properties (organic carbon, bulk density, Cation Exchange Capacity (CEC), pH, soil texture fractions and coarse fragments) at seven standard depths (0, 5, 15, 30, 60, 100 and 200 cm), in addition to predictions of depth to bedrock and distribution of soil classes based on the World Reference Base (WRB) and USDA classification systems (ca. 280 raster layers in total). Predictions were based on ca. 150,000 soil profiles used for training and a stack of 158 remote sensing-based soil covariates (primarily derived from MODIS land products, SRTM DEM derivatives, climatic images and global landform and lithology maps), which were used to fit an ensemble of machine learning methods-random forest and gradient boosting and/or multinomial logistic regression-as implemented in the R packages ranger, xgboost, nnet and caret. The results of 10-fold cross-validation show that the ensemble models explain between 56% (coarse fragments) and 83% (pH) of variation with an overall average of 61%. Improvements in the relative accuracy considering the amount of variation explained, in comparison to the previous version of SoilGrids at 1 km spatial resolution, range from 60 to 230%. Improvements can be attributed to: (1) the use of machine learning instead of linear regression, (2) to considerable investments in preparing finer resolution covariate layers and (3) to insertion of additional soil profiles. Further development of SoilGrids could include refinement of methods to incorporate input uncertainties and derivation of posterior probability distributions (per pixel), and further automation of spatial modeling so that soil maps can be generated for potentially hundreds of soil variables. Another area of future research is the development of methods for multiscale merging of SoilGrids predictions with local and/or national gridded soil products (e.g. up to 50 m spatial resolution) so that increasingly more accurate, complete and consistent global soil information can be produced. SoilGrids are available under the Open Data Base License.
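
SoilGrids fits its ensemble with R packages (ranger, xgboost, nnet, caret); the sketch below is only a Python analogue of the ensembling idea, averaging a random forest and a gradient boosting prediction for a numeric soil property, with synthetic stand-in covariates.

```python
# Python analogue of the ensembling idea (SoilGrids itself uses R: ranger, xgboost, nnet, caret).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                        # stand-in covariate stack
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(size=1000)    # stand-in numeric soil property

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
gb = GradientBoostingRegressor(random_state=0).fit(X, y)

X_new = rng.normal(size=(5, 10))
ensemble_prediction = (rf.predict(X_new) + gb.predict(X_new)) / 2.0   # simple average
print(ensemble_prediction.round(2))
```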


Journal ArticleDOI
TL;DR: In some religious traditions, the myth of the ‘Fall from the Garden of Eden’ symbolizes the loss of the primordial state through the veiling of higher consciousness.
Abstract: Human beings are described by many spiritual traditions as ‘blind’ or ‘asleep’ or ‘in a dream.’ These terms refer to the limited attenuated state of consciousness of most human beings caught up in patterns of conditioned thought, feeling and perception, which prevent the development of our latent, higher spiritual possibilities. In the words of Idries Shah: “Man, like a sleepwalker who suddenly ‘comes to’ on some lonely road has in general no correct idea as to his origins or his destiny.” In some religious traditions, such as Christianity and Islam, the myth of the ‘Fall from the Garden of Eden’ symbolizes the loss of the primordial state through the veiling of higher consciousness. Other traditions use similar metaphors to describe the spiritual condition of humanity:

Journal ArticleDOI
07 May 2015-Nature
TL;DR: This paper reports the experimental implementation of transistor-free metal-oxide memristor crossbars, with device variability sufficiently low to allow operation of integrated neural networks, in a simple network: a single-layer perceptron (an algorithm for linear classification).
Abstract: Despite much progress in semiconductor integrated circuit technology, the extreme complexity of the human cerebral cortex, with its approximately 10(14) synapses, makes the hardware implementation of neuromorphic networks with a comparable number of devices exceptionally challenging. To provide comparable complexity while operating much faster and with manageable power dissipation, networks based on circuits combining complementary metal-oxide-semiconductors (CMOSs) and adjustable two-terminal resistive devices (memristors) have been developed. In such circuits, the usual CMOS stack is augmented with one or several crossbar layers, with memristors at each crosspoint. There have recently been notable improvements in the fabrication of such memristive crossbars and their integration with CMOS circuits, including first demonstrations of their vertical integration. Separately, discrete memristors have been used as artificial synapses in neuromorphic networks. Very recently, such experiments have been extended to crossbar arrays of phase-change memristive devices. The adjustment of such devices, however, requires an additional transistor at each crosspoint, and hence these devices are much harder to scale than metal-oxide memristors, whose nonlinear current-voltage curves enable transistor-free operation. Here we report the experimental implementation of transistor-free metal-oxide memristor crossbars, with device variability sufficiently low to allow operation of integrated neural networks, in a simple network: a single-layer perceptron (an algorithm for linear classification). The network can be taught in situ using a coarse-grain variety of the delta rule algorithm to perform the perfect classification of 3 × 3-pixel black/white images into three classes (representing letters). This demonstration is an important step towards much larger and more complex memristive neuromorphic networks.
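
A software analogue of the network taught in the experiment, assuming only what the abstract states: a single-layer perceptron over 3 × 3 binary images with three output classes, trained with a delta-rule update. In the hardware the weights are memristor conductances adjusted in situ; here they are plain floats, and the three "letter" patterns are invented for illustration.

```python
# Single-layer perceptron on 3x3 binary inputs, trained with a delta-rule update
# (software stand-in; in the experiment each weight is a memristor conductance).
import numpy as np

patterns = np.array([
    [1, 0, 0, 1, 0, 0, 1, 1, 1],   # crude "L" (invented pattern)
    [1, 1, 1, 0, 1, 0, 0, 1, 0],   # crude "T" (invented pattern)
    [1, 1, 1, 1, 0, 0, 1, 1, 1],   # crude "C" (invented pattern)
], dtype=float)
targets = np.eye(3)                 # one output neuron per class

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 9))
b = np.zeros(3)
lr = 0.1

for _ in range(200):                          # a few hundred epochs is plenty here
    for x, t in zip(patterns, targets):
        y = np.tanh(W @ x + b)                # neuron activation
        delta = (t - y) * (1.0 - y ** 2)      # delta rule with tanh derivative
        W += lr * np.outer(delta, x)
        b += lr * delta

pred = np.argmax(np.tanh(W @ patterns.T + b[:, None]), axis=0)
print(pred)                                   # expected: [0 1 2] once trained
```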

Journal ArticleDOI
TL;DR: The current understanding of the pathogenesis, epidemiology, management and outcomes of patients with COVID-19 who develop venous or arterial thrombosis, and of those with preexisting thrombotic disease who develop COVID-19, is reviewed.

Journal ArticleDOI
TL;DR: This study provides the first comprehensive assessment of the global burden of AMR, as well as an evaluation of the availability of data, and estimates aggregated to the global and regional level.

Journal ArticleDOI
01 Jan 2017-Gut
TL;DR: This fifth edition of the Maastricht Consensus Report describes how experts from 24 countries examined new data related to H. pylori infection in the various clinical scenarios and provided recommendations on the basis of the best available evidence and relevance.
Abstract: Important progress has been made in the management of Helicobacter pylori infection and in this fifth edition of the Maastricht Consensus Report, key aspects related to the clinical role of H. pylori were re-evaluated in 2015. In the Maastricht V/Florence Consensus Conference, 43 experts from 24 countries examined new data related to H. pylori in five subdivided workshops: (1) Indications/Associations, (2) Diagnosis, (3) Treatment, (4) Prevention/Public Health, (5) H. pylori and the Gastric Microbiota. The results of the individual workshops were presented to a final consensus voting that included all participants. Recommendations are provided on the basis of the best available evidence and relevance to the management of H. pylori infection in the various clinical scenarios.

Journal ArticleDOI
29 Mar 2021-BMJ
TL;DR: The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement as mentioned in this paper was developed to facilitate transparent and complete reporting of systematic reviews and has been updated to PRISMA 2020 to reflect recent advances in systematic review methodology and terminology.
Abstract: The methods and results of systematic reviews should be reported in sufficient detail to allow users to assess the trustworthiness and applicability of the review findings. The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement was developed to facilitate transparent and complete reporting of systematic reviews and has been updated (to PRISMA 2020) to reflect recent advances in systematic review methodology and terminology. Here, we present the explanation and elaboration paper for PRISMA 2020, where we explain why reporting of each item is recommended, present bullet points that detail the reporting recommendations, and present examples from published reviews. We hope that changes to the content and structure of PRISMA 2020 will facilitate uptake of the guideline and lead to more transparent, complete, and accurate reporting of systematic reviews.