Journal ArticleDOI
TL;DR: Patients with fever and/or cough and with conspicuous ground-glass opacity lesions in the peripheral and posterior lungs on CT images, combined with normal or decreased white blood cells and a history of epidemic exposure, are highly suspected of having 2019 Novel Coronavirus (2019-nCoV) pneumonia.
Abstract: Background: The chest CT findings of patients with 2019 Novel Coronavirus (2019-nCoV) pneumonia have not previously been described in detail. Purpose: To investigate the clinical, laboratory, and imaging findings of emerging 2019-nCoV pneumonia in humans. Materials and Methods: Fifty-one patients (25 men and 26 women; age range 16-76 years) with laboratory-confirmed 2019-nCoV infection by using real-time reverse transcription polymerase chain reaction underwent thin-section CT. The imaging findings, clinical data, and laboratory data were evaluated. Results: Fifty of 51 patients (98%) had a history of contact with individuals from the endemic center in Wuhan, China. Fever (49 of 51, 96%) and cough (24 of 51, 47%) were the most common symptoms. Most patients had a normal white blood cell count (37 of 51, 73%), neutrophil count (44 of 51, 86%), and either normal (17 of 51, 35%) or reduced (33 of 51, 65%) lymphocyte count. CT images showed pure ground-glass opacity (GGO) in 39 of 51 (77%) patients and GGO with reticular and/or interlobular septal thickening in 38 of 51 (75%) patients. GGO with consolidation was present in 30 of 51 (59%) patients, and pure consolidation was present in 28 of 51 (55%) patients. Forty-four of 51 (86%) patients had bilateral lung involvement, while 41 of 51 (80%) involved the posterior part of the lungs and 44 of 51 (86%) were peripheral. There were more consolidated lung lesions in patients 5 days or more from disease onset to CT scan versus 4 days or fewer (431 of 712 lesions vs 129 of 612 lesions; P < .001). Patients older than 50 years had more consolidated lung lesions than did those aged 50 years or younger (212 of 470 vs 198 of 854; P < .001). Follow-up CT in 13 patients showed improvement in seven (54%) patients and progression in four (31%) patients. Conclusion: Patients with fever and/or cough and with conspicuous ground-glass opacity lesions in the peripheral and posterior lungs on CT images, combined with normal or decreased white blood cells and a history of epidemic exposure, are highly suspected of having 2019 Novel Coronavirus (2019-nCoV) pneumonia. © RSNA, 2020.

1,004 citations



Journal ArticleDOI
TL;DR: In this article, the authors reported the observation of a compact binary coalescence involving a 22.2–24.3 solar-mass ($M_{\odot}$) black hole and a compact object with a mass of 2.50–2.67 $M_{\odot}$.
Abstract: We report the observation of a compact binary coalescence involving a 22.2 - 24.3 $M_{\odot}$ black hole and a compact object with a mass of 2.50 - 2.67 $M_{\odot}$ (all measurements quoted at the 90$\%$ credible level). The gravitational-wave signal, GW190814, was observed during LIGO's and Virgo's third observing run on August 14, 2019 at 21:10:39 UTC and has a signal-to-noise ratio of 25 in the three-detector network. The source was localized to 18.5 deg$^2$ at a distance of $241^{+41}_{-45}$ Mpc; no electromagnetic counterpart has been confirmed to date. The source has the most unequal mass ratio yet measured with gravitational waves, $0.112^{+0.008}_{-0.009}$, and its secondary component is either the lightest black hole or the heaviest neutron star ever discovered in a double compact-object system. The dimensionless spin of the primary black hole is tightly constrained to $\leq 0.07$. Tests of general relativity reveal no measurable deviations from the theory, and its prediction of higher-multipole emission is confirmed at high confidence. We estimate a merger rate density of 1-23 Gpc$^{-3}$ yr$^{-1}$ for the new class of binary coalescence sources that GW190814 represents. Astrophysical models predict that binaries with mass ratios similar to this event can form through several channels, but are unlikely to have formed in globular clusters. However, the combination of mass ratio, component masses, and the inferred merger rate for this event challenges all current models for the formation and mass distribution of compact-object binaries.
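As a rough consistency check (using approximate midpoints of the quoted mass ranges, not values taken from the paper's posterior), the mass ratio follows directly from the component masses:

$$ q \;=\; \frac{m_2}{m_1} \;\approx\; \frac{2.6\,M_{\odot}}{23.2\,M_{\odot}} \;\approx\; 0.112, $$

consistent with the quoted $0.112^{+0.008}_{-0.009}$.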

1,004 citations


Journal ArticleDOI
TL;DR: The Zika virus (ZIKV) is a mosquito-borne flavivirus related to yellow fever virus, dengue virus (DENV), and West Nile virus (WNV), and is transmitted by many Aedes spp. mosquitoes.
Abstract: To the Editor: Zika virus (ZIKV) is a mosquito-borne flavivirus related to yellow fever virus, dengue virus (DENV), and West Nile virus (WNV). It is a single-stranded positive RNA virus (10,794-nt genome) that is closely related to the Spondweni virus and is transmitted by many Aedes spp. mosquitoes, including Ae. africanus, Ae. luteocephalus, Ae. hensilli, and Ae. aegypti. The virus was identified in rhesus monkeys during sylvatic yellow fever surveillance in the Zika Forest in Uganda in 1947 and was reported in humans in 1952 (1).

1,003 citations


Journal ArticleDOI
TL;DR: Strategies to address the challenges for materials development in this area, such as the adoption of smart architectures, innovative device configuration design, co-catalyst loading, and surface protection layer deposition, are outlined throughout the text, to deliver a highly efficient and stable PEC device for water splitting.
Abstract: It is widely accepted within the community that to achieve a sustainable society with an energy mix primarily based on solar energy we need an efficient strategy to convert and store sunlight into chemical fuels. A photoelectrochemical (PEC) device would therefore play a key role in offering the possibility of carbon-neutral solar fuel production through artificial photosynthesis. The past five years have seen a surge in the development of promising semiconductor materials. In addition, low-cost earth-abundant co-catalysts are ubiquitous in their employment in water splitting cells due to the sluggish kinetics of the oxygen evolution reaction (OER). This review commences with a fundamental understanding of semiconductor properties and charge transfer processes in a PEC device. We then describe various configurations of PEC devices, including single light-absorber cells and multi light-absorber devices (PEC, PV-PEC and PV/electrolyser tandem cell). Recent progress on both photoelectrode materials (light absorbers) and electrocatalysts is summarized, and important factors which dominate photoelectrode performance, including light absorption, charge separation and transport, surface chemical reaction rate and the stability of the photoanode, are discussed. Controlling semiconductor properties is the primary concern in developing materials for solar water splitting. Accordingly, strategies to address the challenges for materials development in this area, such as the adoption of smart architectures, innovative device configuration design, co-catalyst loading, and surface protection layer deposition, are outlined throughout the text, to deliver a highly efficient and stable PEC device for water splitting.

1,003 citations


Journal ArticleDOI
15 May 2015-Science
TL;DR: The results demonstrate that vaccination directed at tumor-encoded amino acid substitutions broadens the antigenic breadth and clonal diversity of antitumor immunity.
Abstract: T cell immunity directed against tumor-encoded amino acid substitutions occurs in some melanoma patients. This implicates missense mutations as a source of patient-specific neoantigens. However, a systematic evaluation of these putative neoantigens as targets of antitumor immunity is lacking. Moreover, it remains unknown whether vaccination can augment such responses. We found that a dendritic cell vaccine led to an increase in naturally occurring neoantigen-specific immunity and revealed previously undetected human leukocyte antigen (HLA) class I–restricted neoantigens in patients with advanced melanoma. The presentation of neoantigens by HLA-A*02:01 in human melanoma was confirmed by mass spectrometry. Vaccination promoted a diverse neoantigen-specific T cell receptor (TCR) repertoire in terms of both TCR-β usage and clonal composition. Our results demonstrate that vaccination directed at tumor-encoded amino acid substitutions broadens the antigenic breadth and clonal diversity of antitumor immunity.

1,003 citations


Journal ArticleDOI
TL;DR: In this paper, the Atacama Large Millimeter/submillimeter Array (ALMA) observations from the 2014 Long Baseline Campaign in dust continuum and spectral line emission from the HL Tau region were presented.
Abstract: We present Atacama Large Millimeter/submillimeter Array (ALMA) observations from the 2014 Long Baseline Campaign in dust continuum and spectral line emission from the HL Tau region. The continuum images at wavelengths of 2.9, 1.3, and 0.87 mm have unprecedented angular resolutions of 0.075″ (10 AU) to 0.025″ (3.5 AU), revealing an astonishing level of detail in the circumstellar disk surrounding the young solar analogue HL Tau, with a pattern of bright and dark rings observed at all wavelengths. By fitting ellipses to the most distinct rings, we measure precise values for the disk inclination (46.72° ± 0.05°) and position angle (+138.02° ± 0.07°). We obtain a high-fidelity image of the 1.0 mm spectral index (α), which ranges from α ≈ 2.0 in the optically thick central peak and two brightest rings, increasing to 2.3-3.0 in the dark rings. The dark rings are not devoid of emission, and we estimate a grain emissivity index of 0.8 for the innermost dark ring and lower for subsequent dark rings, consistent with some degree of grain growth and evolution. Additional clues that the rings arise from planet formation include an increase in their central offsets with radius and the presence of numerous orbital resonances. At a resolution of 35 AU, we resolve the molecular component of the disk in HCO+ (1-0), which exhibits a pattern over LSR velocities from 2-12 km s⁻¹ consistent with Keplerian motion around a ∼1.3 M⊙ star, although complicated by absorption at low blue-shifted velocities. We also serendipitously detect and resolve the nearby protostars XZ Tau (A/B) and LkHα 358 at 2.9 mm. Subject headings: stars: individual (HL Tau, XZ Tau, LkHα 358) — protoplanetary disks — stars: formation — submillimeter: planetary systems — techniques: interferometric
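As background for the ellipse fits mentioned above: for a geometrically thin, intrinsically circular ring, the projected minor-to-major axis ratio encodes the inclination (a standard geometric relation, not a formula quoted from this abstract):

$$ \cos i = \frac{b}{a}, \qquad i \approx 46.7^{\circ} \;\Rightarrow\; \frac{b}{a} \approx 0.69. $$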

1,003 citations


Journal ArticleDOI
15 Dec 2017-Science
TL;DR: This study demonstrates how atomically dispersed ionic platinum (Pt2+) on ceria (CeO2), which is already thermally stable, can be activated via steam treatment to simultaneously achieve the goals of low-temperature carbon monoxide (CO) oxidation activity while providing outstanding hydrothermal stability.
Abstract: To improve fuel efficiency, advanced combustion engines are being designed to minimize the amount of heat wasted in the exhaust. Hence, future generations of catalysts must perform at temperatures that are 100°C lower than current exhaust-treatment catalysts. Achieving low-temperature activity, while surviving the harsh conditions encountered at high engine loads, remains a formidable challenge. In this study, we demonstrate how atomically dispersed ionic platinum (Pt2+) on ceria (CeO2), which is already thermally stable, can be activated via steam treatment (at 750°C) to simultaneously achieve the goals of low-temperature carbon monoxide (CO) oxidation activity while providing outstanding hydrothermal stability. A new type of active site is created on CeO2 in the vicinity of Pt2+, which provides the improved reactivity. These active sites are stable up to 800°C in oxidizing environments.

1,003 citations


Posted Content
TL;DR: The Dense Convolutional Network (DenseNet) connects each layer to every other layer in a feed-forward fashion and has several compelling advantages: it alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
Abstract: Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at this https URL .
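A minimal sketch of the dense connectivity pattern described in the abstract, assuming PyTorch; the `DenseBlock` name, layer sizes, and growth rate are illustrative choices, not the authors' reference implementation:

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer receives the concatenated feature maps of all preceding layers."""
    def __init__(self, in_channels: int, growth_rate: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            # Layer i sees in_channels + i * growth_rate input channels.
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            # Concatenate every preceding feature map along the channel axis.
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)

# With L layers there are L(L+1)/2 direct connections inside the block.
block = DenseBlock(in_channels=16, growth_rate=12, num_layers=4)
y = block(torch.randn(1, 16, 32, 32))   # -> shape (1, 16 + 4*12, 32, 32)
```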

1,003 citations


Proceedings Article
Ramesh Nallapati, Feifei Zhai, Bowen Zhou
12 Feb 2017
TL;DR: SummaRuNNer is a Recurrent Neural Network (RNN) based sequence model for extractive summarization of documents that achieves performance better than or comparable to the state of the art.
Abstract: We present SummaRuNNer, a Recurrent Neural Network (RNN) based sequence model for extractive summarization of documents and show that it achieves performance better than or comparable to state-of-the-art. Our model has the additional advantage of being very interpretable, since it allows visualization of its predictions broken up by abstract features such as information content, salience and novelty. Another novel contribution of our work is abstractive training of our extractive model that can train on human generated reference summaries alone, eliminating the need for sentence-level extractive labels.
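A minimal sketch of an RNN-based extractive sentence scorer of the kind described above, assuming PyTorch; the bidirectional GRU, hidden sizes, and `ExtractiveSentenceScorer` name are generic illustrative choices and omit the paper's content/salience/novelty decomposition and abstractive training scheme:

```python
import torch
import torch.nn as nn

class ExtractiveSentenceScorer(nn.Module):
    """Scores each sentence of a document for inclusion in an extractive summary."""
    def __init__(self, sent_dim: int = 128, hidden: int = 64):
        super().__init__()
        # Document-level encoder: reads the sequence of sentence embeddings.
        self.doc_rnn = nn.GRU(sent_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, 1)

    def forward(self, sent_embeddings: torch.Tensor) -> torch.Tensor:
        # sent_embeddings: (batch, num_sentences, sent_dim)
        states, _ = self.doc_rnn(sent_embeddings)       # (batch, num_sentences, 2*hidden)
        logits = self.classifier(states).squeeze(-1)    # one logit per sentence
        return torch.sigmoid(logits)                    # P(sentence is in the summary)

# Example: score 10 sentences of one document; training would use binary
# cross-entropy against sentence-level labels (or labels derived from
# abstractive reference summaries, as the paper proposes).
scorer = ExtractiveSentenceScorer()
probs = scorer(torch.randn(1, 10, 128))   # -> shape (1, 10)
```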

1,002 citations


Journal ArticleDOI
TL;DR: The COSMOS2015 catalog, as presented in this paper, contains precise photometric redshifts and stellar masses for more than half a million objects over the 2 deg² COSMOS field, and is highly optimized for the study of galaxy evolution and environments in the early universe.
Abstract: We present the COSMOS2015 catalog, which contains precise photometric redshifts and stellar masses for more than half a million objects over the 2 deg² COSMOS field. Including new YJHKs images from the UltraVISTA-DR2 survey, Y-band images from Subaru/Hyper-Suprime-Cam, and infrared data from the Spitzer Large Area Survey with the Hyper-Suprime-Cam Spitzer legacy program, this near-infrared-selected catalog is highly optimized for the study of galaxy evolution and environments in the early universe. To maximize catalog completeness for bluer objects and at higher redshifts, objects have been detected on a χ² sum of the YJHKs and z++ images. The catalog contains ∼6 × 10⁵ objects in the 1.5 deg² UltraVISTA-DR2 region and ∼1.5 × 10⁵ objects detected in the “ultra-deep stripes” (0.62 deg²) at Ks ≤ 24.7 (3σ, 3″, AB magnitude). Through a comparison with the zCOSMOS-bright spectroscopic redshifts, we measure a photometric redshift precision of σ(Δz/(1+zs)) = 0.007 and a catastrophic failure fraction of η = 0.5%. At 3 < z < 6, using the unique database of spectroscopic redshifts in COSMOS, we find σ(Δz/(1+zs)) = 0.021 and η = 13.2%. The deepest regions reach a 90% completeness limit of 10¹⁰ M⊙ to z = 4. Detailed comparisons of the color distributions, number counts, and clustering show excellent agreement with the literature in the same mass ranges. COSMOS2015 represents a unique, publicly available, valuable resource with which to investigate the evolution of galaxies within their environment back to the earliest stages of the history of the universe. The COSMOS2015 catalog is distributed via anonymous ftp and through the usual astronomical archive systems (CDS, ESO Phase 3, IRSA).

Journal ArticleDOI
Zheng Ye, Yun Zhang, Yi Wang, Zixiang Huang, Bin Song
TL;DR: Ground glass opacities, consolidation, reticular pattern, and crazy paving pattern are typical CT manifestations of COVID-19, and emerging atypical CT manifestations, including airway changes, pleural changes, fibrosis, nodules, etc., were demonstrated in patients.
Abstract: The coronavirus disease 2019 (COVID-19) outbreak, first reported in Wuhan, China, has rapidly swept around the world within a month, causing a global public health emergency. In diagnosis, chest computed tomography (CT) manifestations can compensate for some limitations of the real-time reverse transcription polymerase chain reaction (RT-PCR) assay. Based on a comprehensive literature review and experience on the frontline, we aim to review the typical and relatively atypical CT manifestations with representative COVID-19 cases at our hospital, and hope to strengthen radiologists' recognition of these features and help them make a quick and accurate diagnosis. Key Points: • Ground glass opacities, consolidation, reticular pattern, and crazy paving pattern are typical CT manifestations of COVID-19. • Emerging atypical CT manifestations, including airway changes, pleural changes, fibrosis, nodules, etc., were demonstrated in COVID-19 patients. • CT manifestations may be associated with the progression and prognosis of COVID-19.

Journal ArticleDOI
TL;DR: This updated clinical report provides more practice-based quality improvement guidance on key elements of transition planning, transfer, and integration into adult care for all youth and young adults.
Abstract: Risk and vulnerability encompass many dimensions of the transition from adolescence to adulthood. Transition from pediatric, parent-supervised health care to more independent, patient-centered adult health care is no exception. The tenets and algorithm of the original 2011 clinical report, “Supporting the Health Care Transition from Adolescence to Adulthood in the Medical Home,” are unchanged. This updated clinical report provides more practice-based quality improvement guidance on key elements of transition planning, transfer, and integration into adult care for all youth and young adults. It also includes new and updated sections on definition and guiding principles, the status of health care transition preparation among youth, barriers, outcome evidence, recommended health care transition processes and implementation strategies using quality improvement methods, special populations, education and training in pediatric onset conditions, and payment options. The clinical report also includes new recommendations pertaining to infrastructure, education and training, payment, and research.

Journal ArticleDOI
TL;DR: The Rotation Region Proposal Networks are designed to generate inclined proposals with text orientation angle information; the angle information is then adapted for bounding box regression to make the proposals fit the text region more accurately in terms of orientation.
Abstract: This paper introduces a novel rotation-based framework for arbitrary-oriented text detection in natural scene images. We present the Rotation Region Proposal Networks, which are designed to generate inclined proposals with text orientation angle information. The angle information is then adapted for bounding box regression to make the proposals more accurately fit into the text region in terms of the orientation. The Rotation Region-of-Interest pooling layer is proposed to project arbitrary-oriented proposals to a feature map for a text region classifier. The whole framework is built upon a region-proposal-based architecture, which ensures the computational efficiency of arbitrary-oriented text detection compared with previous text detection systems. We conduct experiments using the rotation-based framework on three real-world scene text detection datasets and demonstrate its superiority in terms of effectiveness and efficiency over previous approaches.

Proceedings ArticleDOI
28 Mar 2017
TL;DR: This work proposes the use of generative adversarial networks for speech enhancement; the model operates at the waveform level, is trained end-to-end, and incorporates 28 speakers and 40 different noise conditions into the same model, such that model parameters are shared across them.
Abstract: Current speech enhancement techniques operate on the spectral domain and/or exploit some higher-level feature. The majority of them tackle a limited number of noise conditions and rely on first-order statistics. To circumvent these issues, deep networks are being increasingly used, thanks to their ability to learn complex functions from large example sets. In this work, we propose the use of generative adversarial networks for speech enhancement. In contrast to current techniques, we operate at the waveform level, training the model end-to-end, and incorporate 28 speakers and 40 different noise conditions into the same model, such that model parameters are shared across them. We evaluate the proposed model using an independent, unseen test set with two speakers and 20 alternative noise conditions. The enhanced samples confirm the viability of the proposed model, and both objective and subjective evaluations confirm the effectiveness of it. With that, we open the exploration of generative architectures for speech enhancement, which may progressively incorporate further speech-centric design choices to improve their performance.

Journal ArticleDOI
TL;DR: Considering the persistence of microplastics in the environment, the high concentrations measured at some environmental sites and the prospective of strongly increasing concentrations, the release of plastics into the environment should be reduced in a broad and global effort regardless of a proof of an environmental risk.
Abstract: Due to the widespread use and durability of synthetic polymers, plastic debris occurs in the environment worldwide. In the present work, information on sources and fate of microplastic particles in the aquatic and terrestrial environment, and on their uptake and effects, mainly in aquatic organisms, is reviewed. Microplastics in the environment originate from a variety of sources. Quantitative information on the relevance of these sources is generally lacking, but first estimates indicate that abrasion and fragmentation of larger plastic items and materials containing synthetic polymers are likely to be most relevant. Microplastics are ingested and, mostly, excreted rapidly by numerous aquatic organisms. So far, there is no clear evidence of bioaccumulation or biomagnification. In laboratory studies, the ingestion of large amounts of microplastics mainly led to a lower food uptake and, consequently, reduced energy reserves and effects on other physiological functions. Based on the evaluated data, the lowest microplastic concentrations affecting marine organisms exposed via water are much higher than levels measured in marine water. In lugworms exposed via sediment, effects were observed at microplastic levels that were higher than those in subtidal sediments but in the same range as maximum levels in beach sediments. Hydrophobic contaminants are enriched on microplastics, but the available experimental results and modelling approaches indicate that the transfer of sorbed pollutants by microplastics is not likely to contribute significantly to bioaccumulation of these pollutants. Prior to being able to comprehensively assess possible environmental risks caused by microplastics a number of knowledge gaps need to be filled. However, in view of the persistence of microplastics in the environment, the high concentrations measured at some environmental sites and the prospective of strongly increasing concentrations, the release of plastics into the environment should be reduced in a broad and global effort regardless of a proof of an environmental risk.

Journal ArticleDOI
23 Sep 2016-Science
TL;DR: This work reports the successful cultivation of multiple HuNoV strains in enterocytes in stem cell–derived, nontransformed human intestinal enteroid monolayer cultures, which recapitulates the human intestinal epithelium, permits human host-pathogen studies of previously noncultivatable pathogens, and allows the assessment of methods to prevent and treat Hu noV infections.
Abstract: The major barrier to research and development of effective interventions for human noroviruses (HuNoVs) has been the lack of a robust and reproducible in vitro cultivation system. HuNoVs are the leading cause of gastroenteritis worldwide. We report the successful cultivation of multiple HuNoV strains in enterocytes in stem cell–derived, nontransformed human intestinal enteroid monolayer cultures. Bile, a critical factor of the intestinal milieu, is required for strain-dependent HuNoV replication. Lack of appropriate histoblood group antigen expression in intestinal cells restricts virus replication, and infectivity is abrogated by inactivation (e.g., irradiation, heating) and serum neutralization. This culture system recapitulates the human intestinal epithelium, permits human host-pathogen studies of previously noncultivatable pathogens, and allows the assessment of methods to prevent and treat HuNoV infections.

Journal ArticleDOI
TL;DR: In patients with moderately to severely active ulcerative colitis, tofacitinib was more effective as induction and maintenance therapy than placebo and was associated with increased lipid levels.
Abstract: Background: Tofacitinib, an oral, small-molecule Janus kinase inhibitor, was shown to have potential efficacy as induction therapy for ulcerative colitis in a phase 2 trial. We further evaluated the efficacy of tofacitinib as induction and maintenance therapy. Methods: We conducted three phase 3, randomized, double-blind, placebo-controlled trials of tofacitinib therapy in adults with ulcerative colitis. In the OCTAVE Induction 1 and 2 trials, 598 and 541 patients, respectively, who had moderately to severely active ulcerative colitis despite previous conventional therapy or therapy with a tumor necrosis factor antagonist were randomly assigned to receive induction therapy with tofacitinib (10 mg twice daily) or placebo for 8 weeks. The primary end point was remission at 8 weeks. In the OCTAVE Sustain trial, 593 patients who had a clinical response to induction therapy were randomly assigned to receive maintenance therapy with tofacitinib (either 5 mg or 10 mg twice daily) or placebo for 52 weeks. The primary...

Journal ArticleDOI
TL;DR: In this paper, an extension of sinc interpolation to algebraically decaying functions is presented, including the case where the order of the function's decay can be estimated everywhere in a horizontal strip of the complex plane around $\mathbb{R}$.
Abstract: An extension of sinc interpolation on $\mathbb{R}$ to the class of algebraically decaying functions is developed in the paper. Similarly to the classical sinc interpolation, we establish two types of error estimates. The first covers a wider class of functions with an algebraic order of decay on $\mathbb{R}$. The second type of error estimate governs the case when the order of the function's decay can be estimated everywhere in a horizontal strip of the complex plane around $\mathbb{R}$. Numerical examples are provided.
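For reference, the classical sinc (cardinal) interpolant that the paper extends takes the standard form (with step size $h$; this is the textbook definition, not a formula quoted from the abstract):

$$ C(f,h)(x) \;=\; \sum_{k=-\infty}^{\infty} f(kh)\, \operatorname{sinc}\!\left(\frac{x - kh}{h}\right), \qquad \operatorname{sinc}(t) = \frac{\sin(\pi t)}{\pi t}, $$

and the two types of error estimates referred to above bound $|f(x) - C(f,h)(x)|$ (with the sum suitably truncated) under the stated decay assumptions on $f$.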

Journal ArticleDOI
TL;DR: Active surveillance for favorable-risk prostate cancer is feasible and seems safe in the 15-year time frame, and the mortality rate is consistent with expected mortality in favorable-risk patients managed with initial definitive intervention.
Abstract: Purpose Active surveillance is increasingly accepted as a treatment option for favorable-risk prostate cancer. Long-term follow-up has been lacking. In this study, we report the long-term outcome of a large active surveillance protocol in men with favorable-risk prostate cancer. Patients and Methods In a prospective single-arm cohort study carried out at a single academic health sciences center, 993 men with favorable- or intermediate-risk prostate cancer were managed with an initial expectant approach. Intervention was offered for a prostate-specific antigen (PSA) doubling time of less than 3 years, Gleason score progression, or unequivocal clinical progression. Main outcome measures were overall and disease-specific survival, rate of treatment, and PSA failure rate in the treated patients. Results Among the 819 survivors, the median follow-up time from the first biopsy is 6.4 years (range, 0.2 to 19.8 years). One hundred forty-nine (15%) of 993 patients died, and 844 patients are alive (censored rate, 8...

Journal ArticleDOI
TL;DR: The Brain Or IntestinaL EstimateD permeation method (BOILED‐Egg) is proposed as an accurate predictive model that works by computing the lipophilicity and polarity of small molecules.
Abstract: Apart from efficacy and toxicity, many drug development failures are imputable to poor pharmacokinetics and bioavailability. Gastrointestinal absorption and brain access are two pharmacokinetic behaviors crucial to estimate at various stages of the drug discovery processes. To this end, the Brain Or IntestinaL EstimateD permeation method (BOILED-Egg) is proposed as an accurate predictive model that works by computing the lipophilicity and polarity of small molecules. Concomitant predictions for both brain and intestinal permeation are obtained from the same two physicochemical descriptors and straightforwardly translated into molecular design, owing to the speed, accuracy, conceptual simplicity and clear graphical output of the model. The BOILED-Egg can be applied in a variety of settings, from the filtering of chemical libraries at the early steps of drug discovery, to the evaluation of drug candidates for development.
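The classification itself reduces to asking whether a molecule's two descriptors fall inside fixed elliptical regions. A hedged sketch of that geometric test follows; the ellipse centers, axes, and the TPSA/WLOGP parameter values are illustrative placeholders, not the published BOILED-Egg boundaries:

```python
import math

def in_ellipse(tpsa: float, wlogp: float, center, axes, angle_deg: float = 0.0) -> bool:
    """Return True if the (TPSA, WLOGP) point lies inside the given ellipse.

    center, axes, and angle_deg define the ellipse in the TPSA/WLOGP plane;
    the numbers used below are illustrative placeholders, NOT the published
    BOILED-Egg ellipse parameters.
    """
    cx, cy = center
    a, b = axes
    theta = math.radians(angle_deg)
    # Rotate the point into the ellipse's own axes, then apply the ellipse equation.
    dx, dy = tpsa - cx, wlogp - cy
    u = dx * math.cos(theta) + dy * math.sin(theta)
    v = -dx * math.sin(theta) + dy * math.cos(theta)
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0

# Illustrative use: classify one molecule by its two descriptors.
tpsa, wlogp = 60.0, 2.5
hia_region = dict(center=(70.0, 2.0), axes=(80.0, 4.0))   # placeholder "white" (GI absorption)
bbb_region = dict(center=(40.0, 2.5), axes=(40.0, 3.0))   # placeholder "yolk" (brain access)
print("passively absorbed (GI)?", in_ellipse(tpsa, wlogp, **hia_region))
print("likely to cross BBB?   ", in_ellipse(tpsa, wlogp, **bbb_region))
```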

Journal ArticleDOI
29 Jul 2020-Nature
TL;DR: The authors summarize the data produced by phase III of the Encyclopedia of DNA Elements (ENCODE) Project, a resource for better understanding the human and mouse genomes, which has produced 5,992 new experimental datasets, including systematic determinations across mouse fetal development.
Abstract: The human and mouse genomes contain instructions that specify RNAs and proteins and govern the timing, magnitude, and cellular context of their production. To better delineate these elements, phase III of the Encyclopedia of DNA Elements (ENCODE) Project has expanded analysis of the cell and tissue repertoires of RNA transcription, chromatin structure and modification, DNA methylation, chromatin looping, and occupancy by transcription factors and RNA-binding proteins. Here we summarize these efforts, which have produced 5,992 new experimental datasets, including systematic determinations across mouse fetal development. All data are available through the ENCODE data portal (https://www.encodeproject.org), including phase II ENCODE1 and Roadmap Epigenomics2 data. We have developed a registry of 926,535 human and 339,815 mouse candidate cis-regulatory elements, covering 7.9 and 3.4% of their respective genomes, by integrating selected datatypes associated with gene regulation, and constructed a web-based server (SCREEN; http://screen.encodeproject.org) to provide flexible, user-defined access to this resource. Collectively, the ENCODE data and registry provide an expansive resource for the scientific community to build a better understanding of the organization and function of the human and mouse genomes.

Proceedings ArticleDOI
05 Jun 2019
TL;DR: In this article, the authors quantified the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP and proposed actionable recommendations to reduce costs and improve equity in NLP research and practice.
Abstract: Recent progress in hardware and methodology for training neural networks has ushered in a new generation of large networks trained on abundant data. These models have obtained notable gains in accuracy across many NLP tasks. However, these accuracy improvements depend on the availability of exceptionally large computational resources that necessitate similarly substantial energy consumption. As a result these models are costly to train and develop, both financially, due to the cost of hardware and electricity or cloud compute time, and environmentally, due to the carbon footprint required to fuel modern tensor processing hardware. In this paper we bring this issue to the attention of NLP researchers by quantifying the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP. Based on these findings, we propose actionable recommendations to reduce costs and improve equity in NLP research and practice.
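A back-of-the-envelope version of the kind of estimate the paper performs, combining hardware power draw, training time, a data-center overhead factor (PUE), and a grid carbon intensity; the default values below are illustrative placeholders, not the constants used in the paper:

```python
def training_footprint(gpu_power_watts: float, num_gpus: int, hours: float,
                       pue: float = 1.58, kg_co2_per_kwh: float = 0.45,
                       usd_per_kwh: float = 0.12):
    """Rough energy, carbon, and electricity-cost estimate for one training run.

    pue: data-center power usage effectiveness (overhead factor). The default
    PUE, carbon intensity, and electricity price are illustrative values only.
    """
    kwh = gpu_power_watts * num_gpus * hours * pue / 1000.0
    return {
        "energy_kwh": kwh,
        "co2_kg": kwh * kg_co2_per_kwh,
        "electricity_usd": kwh * usd_per_kwh,
    }

# Example: 8 GPUs drawing ~250 W each for 72 hours of training.
print(training_footprint(gpu_power_watts=250, num_gpus=8, hours=72))
```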

Journal ArticleDOI
02 Feb 2016-JAMA
TL;DR: This Viewpoint summarizes the updated recommendations of the US Department of Health and Human Services’ recently released 2015-2020 Dietary Guidelines for Americans.
Abstract: This Viewpoint summarizes the updated recommendations of the US Department of Health and Human Services’ recently released 2015-2020 Dietary Guidelines for Americans.

Posted Content
TL;DR: A review of deep learning methods for semantic segmentation applied to various application areas, covering mandatory background concepts as well as the main datasets and challenges, to help researchers decide which ones best suit their needs and targets.
Abstract: Image semantic segmentation is more and more being of interest for computer vision and machine learning researchers. Many applications on the rise need accurate and efficient segmentation mechanisms: autonomous driving, indoor navigation, and even virtual or augmented reality systems to name a few. This demand coincides with the rise of deep learning approaches in almost every field or application target related to computer vision, including semantic segmentation or scene understanding. This paper provides a review on deep learning methods for semantic segmentation applied to various application areas. Firstly, we describe the terminology of this field as well as mandatory background concepts. Next, the main datasets and challenges are exposed to help researchers decide which are the ones that best suit their needs and their targets. Then, existing methods are reviewed, highlighting their contributions and their significance in the field. Finally, quantitative results are given for the described methods and the datasets in which they were evaluated, following up with a discussion of the results. At last, we point out a set of promising future works and draw our own conclusions about the state of the art of semantic segmentation using deep learning techniques.

Posted Content
TL;DR: In this paper, the authors propose a direct perception approach to estimate the affordance for driving and train a deep Convolutional Neural Network using recordings from 12 hours of human driving in a video game.
Abstract: Today, there are two major paradigms for vision-based autonomous driving systems: mediated perception approaches that parse an entire scene to make a driving decision, and behavior reflex approaches that directly map an input image to a driving action by a regressor. In this paper, we propose a third paradigm: a direct perception approach to estimate the affordance for driving. We propose to map an input image to a small number of key perception indicators that directly relate to the affordance of a road/traffic state for driving. Our representation provides a set of compact yet complete descriptions of the scene to enable a simple controller to drive autonomously. Falling in between the two extremes of mediated perception and behavior reflex, we argue that our direct perception representation provides the right level of abstraction. To demonstrate this, we train a deep Convolutional Neural Network using recordings from 12 hours of human driving in a video game and show that our model can work well to drive a car in a very diverse set of virtual environments. We also train a model for car distance estimation on the KITTI dataset. Results show that our direct perception approach can generalize well to real driving images. Source code and data are available on our project website.

Proceedings ArticleDOI
08 Sep 2016
TL;DR: A generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene's foreground from the background is proposed; it can generate tiny videos of up to a second at full frame rate better than simple baselines.
Abstract: We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene's foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.

Journal ArticleDOI
TL;DR: A method able to both coarsely and finely register sets of 3D points provided by low-cost depth-sensing cameras into a common coordinate system, overcoming the noisy data problem by means of a model-based solution for multiplane registration.
Abstract: A novel method, µ-MAR, able to both coarsely and finely register 3D point sets. The method overcomes the noisy data problem using model-based plane registration. µ-MAR iteratively registers 3D markers placed around the object to be reconstructed. It uses a variant of multi-view registration with subsets of data. The transformations used to register the markers allow the object to be reconstructed accurately. Many applications, including object reconstruction, robot guidance, and scene mapping, require the registration of multiple views from a scene to generate a complete geometric and appearance model of it. In real situations, transformations between views are unknown and it is necessary to apply expert inference to estimate them. In the last few years, the emergence of low-cost depth-sensing cameras has strengthened research on this topic, motivating a plethora of new applications. Although they have enough resolution and accuracy for many applications, some situations may not be solved with general state-of-the-art registration methods due to the signal-to-noise ratio (SNR) and the resolution of the data provided. The problem of working with low-SNR data may, in general terms, appear in any 3D system, so it is necessary to propose novel solutions in this respect. In this paper, we propose a method, µ-MAR, able to both coarsely and finely register sets of 3D points provided by low-cost depth-sensing cameras (although it is not restricted to these sensors) into a common coordinate system. The method is able to overcome the noisy data problem by means of a model-based solution for multiplane registration. Specifically, it iteratively registers 3D markers composed of multiple planes extracted from points of multiple views of the scene. As the markers and the object of interest are static in the scenario, the transformations obtained for the markers are applied to the object in order to reconstruct it. Experiments have been performed using synthetic and real data. The synthetic data allow a qualitative and quantitative evaluation by means of visual inspection and Hausdorff distance, respectively. The real-data experiments show the performance of the proposal using data acquired by a Primesense Carmine RGB-D sensor. The method has been compared to several state-of-the-art methods. The results show the good performance of µ-MAR in registering objects with high accuracy in the presence of noisy data, outperforming the existing methods.

Journal ArticleDOI
TL;DR: This survey aims to provide a general, comprehensive, and structured overview of the state-of-the-art methods for anomaly detection in data represented as graphs, and gives a general framework for the algorithms categorized under various settings.
Abstract: Detecting anomalies in data is a vital task, with numerous high-impact applications in areas such as security, finance, health care, and law enforcement. While numerous techniques have been developed in past years for spotting outliers and anomalies in unstructured collections of multi-dimensional points, with graph data becoming ubiquitous, techniques for structured graph data have been of focus recently. As objects in graphs have long-range correlations, a suite of novel technology has been developed for anomaly detection in graph data. This survey aims to provide a general, comprehensive, and structured overview of the state-of-the-art methods for anomaly detection in data represented as graphs. As a key contribution, we give a general framework for the algorithms categorized under various settings: unsupervised versus (semi-)supervised approaches, for static versus dynamic graphs, for attributed versus plain graphs. We highlight the effectiveness, scalability, generality, and robustness aspects of the methods. What is more, we stress the importance of anomaly attribution and highlight the major techniques that facilitate digging out the root cause, or the `why', of the detected anomalies for further analysis and sense-making. Finally, we present several real-world applications of graph-based anomaly detection in diverse domains, including financial, auction, computer traffic, and social networks. We conclude our survey with a discussion on open theoretical and practical challenges in the field.

Journal ArticleDOI
TL;DR: It is recommended that block cross-validation be used wherever dependence structures exist in a dataset, even if no correlation structure is visible in the fitted model residuals, or if the fitted models account for such correlations.
Abstract: Ecological data often show temporal, spatial, hierarchical (random effects), or phylogenetic structure. Modern statistical approaches are increasingly accounting for such dependencies. However, when performing cross-validation, these structures are regularly ignored, resulting in serious underestimation of predictive error. One cause for the poor performance of uncorrected (random) cross-validation, noted often by modellers, are dependence structures in the data that persist as dependence structures in model residuals, violating the assumption of independence. Even more concerning, because often overlooked, is that structured data also provides ample opportunity for overfitting with non-causal predictors. This problem can persist even if remedies such as autoregressive models, generalized least squares, or mixed models are used. Block cross-validation, where data are split strategically rather than randomly, can address these issues. However, the blocking strategy must be carefully considered. Blocking in space, time, random effects or phylogenetic distance, while accounting for dependencies in the data, may also unwittingly induce extrapolations by restricting the ranges or combinations of predictor variables available for model training, thus overestimating interpolation errors. On the other hand, deliberate blocking in predictor space may also improve error estimates when extrapolation is the modelling goal. Here, we review the ecological literature on non-random and blocked cross-validation approaches. We also provide a series of simulations and case studies, in which we show that, for all instances tested, block cross-validation is nearly universally more appropriate than random cross-validation if the goal is predicting to new data or predictor space, or for selecting causal predictors. We recommend that block cross-validation be used wherever dependence structures exist in a dataset, even if no correlation structure is visible in the fitted model residuals, or if the fitted models account for such correlations.
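A minimal sketch of blocked cross-validation on spatially structured data, assuming scikit-learn; the grid-cell blocking rule and the random-forest model are generic illustrative choices, not the specific designs evaluated in the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)

# Toy spatially structured data: coordinates plus two non-spatial predictors.
n = 500
coords = rng.uniform(0, 100, size=(n, 2))
X = np.column_stack([coords, rng.normal(size=(n, 2))])
y = 0.05 * coords[:, 0] + X[:, 2] + rng.normal(scale=0.5, size=n)

# Blocking: assign each observation to a spatial grid cell; folds never split a cell,
# so test data are spatially separated from training data.
cell = (coords[:, 0] // 25).astype(int) * 10 + (coords[:, 1] // 25).astype(int)

model = RandomForestRegressor(n_estimators=100, random_state=0)
blocked = cross_val_score(model, X, y, cv=GroupKFold(n_splits=5), groups=cell)
print("blocked CV R^2 per fold:", blocked)
```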