
Journal ArticleDOI
TL;DR: A major update of PhenoScanner is presented, including over 150 million genetic variants and more than 65 billion associations with diseases and traits, gene expression, metabolite and protein levels, and epigenetic markers.
Abstract: SUMMARY PhenoScanner is a curated database of publicly available results from large-scale genetic association studies in humans. This online tool facilitates 'phenome scans', where genetic variants are cross-referenced for association with many phenotypes of different types. Here we present a major update of PhenoScanner ('PhenoScanner V2'), including over 150 million genetic variants and more than 65 billion associations (compared to 350 million associations in PhenoScanner V1) with diseases and traits, gene expression, metabolite and protein levels, and epigenetic markers. The query options have been extended to include searches by genes, genomic regions and phenotypes, as well as for genetic variants. All variants are positionally annotated using the Variant Effect Predictor and the phenotypes are mapped to Experimental Factor Ontology terms. Linkage disequilibrium statistics from the 1000 Genomes project can be used to search for phenotype associations with proxy variants. AVAILABILITY AND IMPLEMENTATION PhenoScanner V2 is available at www.phenoscanner.medschl.cam.ac.uk.

643 citations


Posted Content
TL;DR: This work builds on recent advances in the training of deep networks on user-partitioned data and privacy accounting for stochastic gradient descent, and adds user-level privacy protection to the federated averaging algorithm, which makes "large step" updates from user-level data.
Abstract: We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees with only a negligible cost in predictive accuracy. Our work builds on recent advances in the training of deep networks on user-partitioned data and privacy accounting for stochastic gradient descent. In particular, we add user-level privacy protection to the federated averaging algorithm, which makes "large step" updates from user-level data. Our work demonstrates that given a dataset with a sufficiently large number of users (a requirement easily met by even small internet-scale datasets), achieving differential privacy comes at the cost of increased computation, rather than in decreased utility as in most prior work. We find that our private LSTM language models are quantitatively and qualitatively similar to un-noised models when trained on a large dataset.
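The clip-average-noise mechanism at the heart of user-level DP federated averaging can be sketched in a few lines: bound each user's contribution by clipping the update norm, average across users, and add Gaussian noise calibrated to that bound. A minimal numpy sketch; the clip norm and noise multiplier below are illustrative values, not those used in the paper:

```python
import numpy as np

def dp_fedavg_round(user_updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One round of user-level DP federated averaging (sketch).

    Clipping each user's update to `clip_norm` bounds any single user's
    influence on the mean by clip_norm / n; Gaussian noise scaled to that
    sensitivity is then added to the averaged update.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for u in user_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    mean = np.mean(clipped, axis=0)
    # Sensitivity of the mean to one user is clip_norm / n.
    sigma = noise_multiplier * clip_norm / len(user_updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)

# Two toy "user updates" with very different norms (5.0 and 0.5).
updates = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
noisy_mean = dp_fedavg_round(updates)
```

The privacy accounting step (tracking the cumulative epsilon across rounds) is omitted here; it is what turns this per-round noise into an end-to-end differential privacy guarantee.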

643 citations


Journal ArticleDOI
TL;DR: It is shown that GUIDANCE2 outperforms all previously developed methodologies to detect unreliable MSA regions and provides a set of alternative MSAs which can be useful for downstream analyses.
Abstract: Inference of multiple sequence alignments (MSAs) is a critical part of phylogenetic and comparative genomics studies. However, from the same set of sequences different MSAs are often inferred, depending on the methodologies used and the assumed parameters. Much effort has recently been devoted to improving the ability to identify unreliable alignment regions. Detecting such unreliable regions was previously shown to be important for downstream analyses relying on MSAs, such as the detection of positive selection. Here we developed GUIDANCE2, a new integrative methodology that accounts for: (i) uncertainty in the process of indel formation, (ii) uncertainty in the assumed guide tree and (iii) co-optimal solutions in the pairwise alignments, used as building blocks in progressive alignment algorithms. We compared GUIDANCE2 with seven methodologies to detect unreliable MSA regions using extensive simulations and empirical benchmarks. We show that GUIDANCE2 outperforms all previously developed methodologies. Furthermore, GUIDANCE2 also provides a set of alternative MSAs which can be useful for downstream analyses. The novel algorithm is implemented as a web-server, available at: http://guidance.tau.ac.il.

643 citations


Journal ArticleDOI
26 Jan 2018-Science
TL;DR: The findings suggest that genetic nurture is ultimately due to genetic variation in the population and is mediated by the environment that parents create for their children.
Abstract: Sequence variants in the parental genomes that are not transmitted to a child (the proband) are often ignored in genetic studies. Here we show that nontransmitted alleles can affect a child through their impacts on the parents and other relatives, a phenomenon we call "genetic nurture." Using results from a meta-analysis of educational attainment, we find that the polygenic score computed for the nontransmitted alleles of 21,637 probands with at least one parent genotyped has an estimated effect on the educational attainment of the proband that is 29.9% (P = 1.6 × 10-14) of that of the transmitted polygenic score. Genetic nurturing effects of this polygenic score extend to other traits. Paternal and maternal polygenic scores have similar effects on educational attainment, but mothers contribute more than fathers to nutrition- and health-related traits.
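The transmitted/non-transmitted split can be made concrete with a toy polygenic score: at each variant, each parent passes one allele to the proband and keeps one non-transmitted allele, and the score is the effect-size-weighted sum of effect-allele counts, computed separately for the two allele sets. All weights and genotypes below are invented for illustration:

```python
# Effect-size weights for 3 variants from a hypothetical GWAS (illustrative).
weights = [0.2, -0.1, 0.4]

# For each parent and variant: (transmitted, nontransmitted) effect-allele
# counts, coded 0/1. Entirely made-up values for the sketch.
father = [(1, 0), (0, 1), (1, 1)]
mother = [(0, 0), (1, 1), (0, 1)]

def pgs(parent_alleles, weights, transmitted=True):
    """Weighted allele count over one parent's transmitted or kept alleles."""
    idx = 0 if transmitted else 1
    return sum(w * alleles[idx] for w, alleles in zip(weights, parent_alleles))

score_transmitted = pgs(father, weights) + pgs(mother, weights)
score_nontransmitted = (pgs(father, weights, transmitted=False)
                        + pgs(mother, weights, transmitted=False))
```

The study's key comparison is between the estimated phenotypic effects of these two scores: a nonzero effect of the non-transmitted score can only act through the environment the parents create.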

643 citations


Journal ArticleDOI
23 Jan 2017-Nature
TL;DR: In this article, an alignment between the global angular momentum of a non-central collision and the spin of emitted particles is presented, revealing that the fluid produced in heavy ion collisions is the most vortical system so far observed.
Abstract: The extreme energy densities generated by ultra-relativistic collisions between heavy atomic nuclei produce a state of matter that behaves surprisingly like a fluid, with exceptionally high temperature and low viscosity. Non-central collisions have angular momenta of the order of 1,000ħ, and the resulting fluid may have a strong vortical structure that must be understood to describe the fluid properly. The vortical structure is also of particular interest because the restoration of fundamental symmetries of quantum chromodynamics is expected to produce novel physical effects in the presence of strong vorticity. However, no experimental indications of fluid vorticity in heavy ion collisions have yet been found. Since vorticity represents a local rotational structure of the fluid, spin-orbit coupling can lead to preferential orientation of particle spins along the direction of rotation. Here we present measurements of an alignment between the global angular momentum of a non-central collision and the spin of emitted particles (in this case the collision occurs between gold nuclei and produces Λ baryons), revealing that the fluid produced in heavy ion collisions is the most vortical system so far observed. (At high energies, this fluid is a quark-gluon plasma.) We find that Λ and Λ̄ hyperons show a positive polarization of the order of a few per cent, consistent with some hydrodynamic predictions. (A hyperon is a particle composed of three quarks, at least one of which is a strange quark; the remainder are up and down quarks, found in protons and neutrons.) A previous measurement that reported a null result, that is, zero polarization, at higher collision energies is seen to be consistent with the trend of our observations, though with larger statistical uncertainties.
These data provide experimental access to the vortical structure of the nearly ideal liquid created in a heavy ion collision and should prove valuable in the development of hydrodynamic models that quantitatively connect observations to the theory of the strong force.

643 citations


Journal ArticleDOI
TL;DR: It is asserted that the collected updates in GRCh38 make the newer assembly a more robust substrate for comprehensive analyses that will promote the understanding of human biology and advance the efforts to improve health.
Abstract: The human reference genome assembly plays a central role in nearly all aspects of today's basic and clinical research. GRCh38 is the first coordinate-changing assembly update since 2009; it reflects the resolution of roughly 1000 issues and encompasses modifications ranging from thousands of single base changes to megabase-scale path reorganizations, gap closures, and localization of previously orphaned sequences. We developed a new approach to sequence generation for targeted base updates and used data from new genome mapping technologies and single haplotype resources to identify and resolve larger assembly issues. For the first time, the reference assembly contains sequence-based representations for the centromeres. We also expanded the number of alternate loci to create a reference that provides a more robust representation of human population variation. We demonstrate that the updates render the reference an improved annotation substrate, alter read alignments in unchanged regions, and impact variant interpretation at clinically relevant loci. We additionally evaluated a collection of new de novo long-read haploid assemblies and conclude that although the new assemblies compare favorably to the reference with respect to continuity, error rate, and gene completeness, the reference still provides the best representation for complex genomic regions and coding sequences. We assert that the collected updates in GRCh38 make the newer assembly a more robust substrate for comprehensive analyses that will promote our understanding of human biology and advance our efforts to improve health.

643 citations


Journal ArticleDOI
TL;DR: In this article, the performance of zinc oxide (ZnO) has been improved by tailoring its surface-bulk structure and altering its photogenerated charge transfer pathways with an intention to inhibit the surface-bulk charge carrier recombination.
Abstract: As an alternative to the gold standard TiO2 photocatalyst, the use of zinc oxide (ZnO) as a robust candidate for wastewater treatment is widespread due to its similarity in charge carrier dynamics upon bandgap excitation and the generation of reactive oxygen species in aqueous suspensions with TiO2. However, the large bandgap of ZnO, the massive charge carrier recombination, and the photoinduced corrosion–dissolution at extreme pH conditions, together with the formation of inert Zn(OH)2 during photocatalytic reactions, act as barriers for its extensive applicability. To this end, research has been intensified to improve the performance of ZnO by tailoring its surface-bulk structure and by altering its photogenerated charge transfer pathways with an intention to inhibit the surface-bulk charge carrier recombination. For the first time, the various strategies that have been successfully employed to improve the photoactivity and stability of ZnO, such as tailoring the intrinsic defects, surface modification with organic compounds, doping with foreign ions, noble metal deposition, heterostructuring with other semiconductors and modification with carbon nanostructures, are critically reviewed. Such modifications enhance the charge separation and facilitate the generation of reactive oxygenated free radicals, and also the interaction with the pollutant molecules. The synthetic routes used to obtain hierarchical nanostructured morphologies, and their impact on the photocatalytic performance, are explained by considering the morphological influence and the defect-rich chemistry of ZnO. Finally, the crystal facet engineering of polar and non-polar facets and their relevance in photocatalysis is outlined. It is with this intention that the present review directs the further design, tailoring and tuning of the physico-chemical and optoelectronic properties of ZnO for better applications, ranging from photocatalysis to photovoltaics.

643 citations


Journal ArticleDOI
TL;DR: The authors present seven principles that have guided their thinking about emotional intelligence, some of them new, and reformulate their original ability model in light of these principles.
Abstract: This article presents seven principles that have guided our thinking about emotional intelligence, some of them new. We have reformulated our original ability model here guided by these principles,...

642 citations


Journal ArticleDOI
TL;DR: In this article, the best practices for measuring and reporting metrics such as capacitance, capacity, coulombic and energy efficiencies, electrochemical impedance, and the energy and power densities of capacitive and pseudocapacitive materials are discussed.
Abstract: Due to the tremendous importance of electrochemical energy storage, numerous new materials and electrode architectures for batteries and supercapacitors have emerged in recent years. Correctly characterizing these systems requires considerable time, effort, and experience to ensure proper metrics are reported. Many new nanomaterials show electrochemical behavior somewhere in between conventional double‐layer capacitor and battery electrode materials, making their characterization a non‐straightforward task. It is understandable that some researchers may be misinformed about how to rigorously characterize their materials and devices, which can result in inflation of their reported data. This is not uncommon considering the current state of the field nearly requires record breaking performance for publication in high‐impact journals. Incorrect characterization and data reporting misleads both the materials and device development communities, and it is the shared responsibility of the community to follow rigorous reporting methodologies to ensure published results are reliable to ensure constructive progress. This tutorial aims to clarify the main causes of inaccurate data reporting and to give examples of how researchers should proceed. The best practices for measuring and reporting metrics such as capacitance, capacity, coulombic and energy efficiencies, electrochemical impedance, and the energy and power densities of capacitive and pseudocapacitive materials are discussed.
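The arithmetic behind the most commonly mis-reported metrics is simple. For an ideal galvanostatic discharge, the standard textbook relations are C = I·Δt/(m·ΔV), E = ½·C·ΔV², and average P = E/Δt. A sketch using made-up measurement values (not figures from this tutorial):

```python
def gcd_metrics(current_a, discharge_s, mass_kg, delta_v):
    """Gravimetric metrics from a galvanostatic charge-discharge curve.

    Assumes an ideal (linear) discharge profile; non-linear curves require
    integrating the discharge curve instead of this closed form.
    """
    c_f_per_kg = current_a * discharge_s / (mass_kg * delta_v)  # capacitance
    e_j_per_kg = 0.5 * c_f_per_kg * delta_v ** 2                # stored energy
    e_wh_per_kg = e_j_per_kg / 3600.0                           # J -> Wh
    p_w_per_kg = e_j_per_kg / discharge_s                       # average power
    return c_f_per_kg, e_wh_per_kg, p_w_per_kg

# Hypothetical electrode: 1 mA discharge over 200 s, 1 mg active mass, 1 V window.
cap, energy, power = gcd_metrics(1e-3, 200.0, 1e-6, 1.0)
```

A frequent source of inflated numbers, as the tutorial notes, is normalizing by active-material mass alone rather than the full device mass, or applying the capacitive formulas to battery-like (plateau) discharge curves where they do not hold.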

642 citations


Proceedings ArticleDOI
18 Jun 2018
TL;DR: A single-shot approach for simultaneously detecting an object in an RGB image and predicting its 6D pose without requiring multiple stages or having to examine multiple hypotheses is proposed, which substantially outperforms other recent CNN-based approaches when they are all used without postprocessing.
Abstract: We propose a single-shot approach for simultaneously detecting an object in an RGB image and predicting its 6D pose without requiring multiple stages or having to examine multiple hypotheses. Unlike a recently proposed single-shot technique for this task [10] that only predicts an approximate 6D pose that must then be refined, ours is accurate enough not to require additional post-processing. As a result, it is much faster - 50 fps on a Titan X (Pascal) GPU - and more suitable for real-time processing. The key component of our method is a new CNN architecture inspired by [27, 28] that directly predicts the 2D image locations of the projected vertices of the object's 3D bounding box. The object's 6D pose is then estimated using a PnP algorithm. For single object and multiple object pose estimation on the LINEMOD and OCCLUSION datasets, our approach substantially outperforms other recent CNN-based approaches [10, 25] when they are all used without postprocessing. During post-processing, a pose refinement step can be used to boost the accuracy of these two methods, but at 10 fps or less, they are much slower than our method.
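The geometric backbone of this pipeline is the pinhole projection that maps the object's 3D bounding-box vertices into the image; the network predicts those 2D points, and a PnP solver inverts the projection to recover the rotation R and translation t. A numpy sketch of the forward model only, with an illustrative camera matrix and pose (not values from the paper):

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project Nx3 object-frame points into pixels: x ~ K (R X + t)."""
    cam = points_3d @ R.T + t        # transform into the camera frame
    uv = cam @ K.T                   # apply the intrinsic matrix
    return uv[:, :2] / uv[:, 2:3]    # perspective divide

# Illustrative intrinsics and pose.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                        # identity rotation for the sketch
t = np.array([0.0, 0.0, 2.0])        # object 2 m in front of the camera

# The 8 vertices of a 0.2 m cube centred on the object origin.
s = 0.1
corners = np.array([[x, y, z] for x in (-s, s) for y in (-s, s) for z in (-s, s)])
pixels = project(corners, K, R, t)
```

Given the network's 8 predicted pixel locations and the known 3D box, a PnP solver (e.g. OpenCV's `solvePnP`) solves the inverse problem for R and t; that final step is the only geometric computation the method needs after the single forward pass.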

642 citations


Book ChapterDOI
20 Oct 2020
TL;DR: This chapter focuses on one of the key processes in the ‘cultural circuit’ – the practices of representation – and draws a distinction between three different accounts or theories: the reflective, the intentional and the constructionist approaches to representation.
Abstract: Representation is an essential part of the process by which meaning is produced and exchanged between members of a culture. It does involve the use of language, of signs and images which stand for or represent things. Representation is the production of the meaning of the concepts in our minds through language. It is the link between concepts and language which enables us to refer to either ‘real’ world of objects, people or events, or indeed to imaginary worlds of fictional objects, people and events. Meaning depends on relationship between things in the world – people, objects and events, real or fictional – and conceptual system, which can operate as mental representations of them. At the heart of meaning process in culture, then, are related ‘systems of representation’. The related ‘systems of representation’ enables us to give meaning to world by constructing a set of correspondences or a chain of equivalences between things – people, objects, events, abstract ideas, etc.

Posted Content
TL;DR: It is argued that the current recipe for large batch training (linear learning rate scaling with warm-up) is not general enough and training may diverge and a new training algorithm based on Layer-wise Adaptive Rate Scaling (LARS) is proposed.
Abstract: A common way to speed up training of large convolutional networks is to add computational units. Training is then performed using data-parallel synchronous Stochastic Gradient Descent (SGD) with the mini-batch divided between computational units. With an increase in the number of nodes, the batch size grows. But training with a large batch size often results in lower model accuracy. We argue that the current recipe for large batch training (linear learning rate scaling with warm-up) is not general enough and training may diverge. To overcome these optimization difficulties we propose a new training algorithm based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled Alexnet up to a batch size of 8K, and Resnet-50 to a batch size of 32K without loss in accuracy.
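The LARS rule itself is compact: each layer receives a local learning rate proportional to the ratio of its weight norm to its gradient norm, so layers with relatively small gradients are not starved and layers with large gradients do not diverge. A single-layer numpy sketch; the trust coefficient and weight-decay values are illustrative:

```python
import numpy as np

def lars_update(w, grad, base_lr=1.0, trust_coeff=0.001, weight_decay=5e-4):
    """One LARS step for a single layer (sketch of the layer-wise scaling)."""
    w_norm = np.linalg.norm(w)
    g_norm = np.linalg.norm(grad)
    # Layer-wise local LR: ||w|| / (||g|| + wd * ||w||), scaled by trust_coeff.
    local_lr = trust_coeff * w_norm / (g_norm + weight_decay * w_norm + 1e-12)
    return w - base_lr * local_lr * (grad + weight_decay * w)

w = np.array([3.0, 4.0])   # ||w|| = 5
g = np.array([0.6, 0.8])   # ||g|| = 1
w_new = lars_update(w, g)
```

Because the step size adapts per layer, a single global schedule no longer has to compromise between layers whose weight/gradient ratios differ by orders of magnitude, which is what makes the very large batch sizes feasible.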

Journal ArticleDOI
TL;DR: Low-temperature, solution-phase growth of cesium lead halide nanowires exhibiting low-threshold lasing and high stability is reported, which makes these nanowire lasers attractive for device fabrication.
Abstract: The rapidly growing field of nanoscale lasers can be advanced through the discovery of new, tunable light sources. The emission wavelength tunability demonstrated in perovskite materials is an attractive property for nanoscale lasers. Whereas organic-inorganic lead halide perovskite materials are known for their instability, cesium lead halides offer a robust alternative without sacrificing emission tunability or ease of synthesis. Here, we report the low-temperature, solution-phase growth of cesium lead halide nanowires exhibiting low-threshold lasing and high stability. The as-grown nanowires are single crystalline with well-formed facets, and act as high-quality laser cavities. The nanowires display excellent stability while stored and handled under ambient conditions over the course of weeks. Upon optical excitation, Fabry-Perot lasing occurs in CsPbBr3 nanowires with an onset of 5 μJ cm⁻², with the nanowire cavity displaying a maximum quality factor of 1,009 ± 5. Lasing under constant, pulsed excitation can be maintained for over 1 h, the equivalent of 10⁹ excitation cycles, and lasing persists upon exposure to ambient atmosphere. Wavelength tunability in the green and blue regions of the spectrum in conjunction with excellent stability makes these nanowire lasers attractive for device fabrication.

Journal ArticleDOI
TL;DR: An experimental approach to measuring the exciton binding energy of monolayer WS2 with linear differential transmission spectroscopy and two-photon photoluminescence excitation spectroscopy is reported.
Abstract: The optical properties of monolayer transition metal dichalcogenides (TMDC) feature prominent excitonic natures. Here we report an experimental approach to measuring the exciton binding energy of monolayer WS2 with linear differential transmission spectroscopy and two-photon photoluminescence excitation spectroscopy (TP-PLE). TP-PLE measurements show the exciton binding energy of 0.71 ± 0.01 eV around K valley in the Brillouin zone.

Journal ArticleDOI
TL;DR: The expression pattern of ACE2 across > 150 different cell types corresponding to all major human tissues and organs based on stringent immunohistochemical analysis constitutes an important resource for further studies on SARS‐CoV‐2 host cell entry, to understand the biology of the disease and to aid in the development of effective treatments to the viral infection.
Abstract: The novel SARS-coronavirus 2 (SARS-CoV-2) poses a global challenge on healthcare and society. For understanding the susceptibility for SARS-CoV-2 infection, the cell type-specific expression of the host cell surface receptor is necessary. The key protein suggested to be involved in host cell entry is angiotensin I converting enzyme 2 (ACE2). Here, we report the expression pattern of ACE2 across > 150 different cell types corresponding to all major human tissues and organs based on stringent immunohistochemical analysis. The results were compared with several datasets both on the mRNA and protein level. ACE2 expression was mainly observed in enterocytes, renal tubules, gallbladder, cardiomyocytes, male reproductive cells, placental trophoblasts, ductal cells, eye, and vasculature. In the respiratory system, the expression was limited, with no or only low expression in a subset of cells in a few individuals, observed by one antibody only. Our data constitute an important resource for further studies on SARS-CoV-2 host cell entry, in order to understand the biology of the disease and to aid in the development of effective treatments to the viral infection.

Journal ArticleDOI
TL;DR: The estimated US national MS prevalence for 2010 is the highest reported to date and provides evidence that the north-south gradient persists; the algorithm-based approach has the potential to be used for other chronic neurologic conditions.
Abstract: Objective To generate a national multiple sclerosis (MS) prevalence estimate for the United States by applying a validated algorithm to multiple administrative health claims (AHC) datasets. Methods A validated algorithm was applied to private, military, and public AHC datasets to identify adult cases of MS between 2008 and 2010. In each dataset, we determined the 3-year cumulative prevalence overall and stratified by age, sex, and census region. We applied insurance-specific and stratum-specific estimates to the 2010 US Census data and pooled the findings to calculate the 2010 prevalence of MS in the United States cumulated over 3 years. We also estimated the 2010 prevalence cumulated over 10 years using 2 models and extrapolated our estimate to 2017. Results The estimated 2010 prevalence of MS in the US adult population cumulated over 10 years was 309.2 per 100,000 (95% confidence interval [CI] 308.1–310.1), representing 727,344 cases. During the same time period, the MS prevalence was 450.1 per 100,000 (95% CI 448.1–451.6) for women and 159.7 (95% CI 158.7–160.6) for men (female:male ratio 2.8). The estimated 2010 prevalence of MS was highest in the 55- to 64-year age group. A US north-south decreasing prevalence gradient was identified. The estimated MS prevalence is also presented for 2017. Conclusion The estimated US national MS prevalence for 2010 is the highest reported to date and provides evidence that the north-south gradient persists. Our rigorous algorithm-based approach to estimating prevalence is efficient and has the potential to be used for other chronic neurologic conditions.
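The headline figures can be sanity-checked in a couple of lines: the female:male ratio follows directly from the sex-specific prevalences, and the case count together with the overall rate implies the adult denominator (the ~235 million figure below is inferred from the abstract's numbers, not stated in it):

```python
# Figures quoted in the abstract (per 100,000, 10-year cumulative, 2010).
female_per_100k = 450.1
male_per_100k = 159.7
overall_per_100k = 309.2
cases = 727_344

# Female:male prevalence ratio, reported as 2.8.
ratio = female_per_100k / male_per_100k

# Adult population implied by cases / rate.
implied_adult_population = cases / (overall_per_100k / 100_000)
```

The implied denominator of roughly 235 million is consistent with the 2010 US Census adult population, which is the base the study applied its stratum-specific estimates to.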

Proceedings ArticleDOI
17 May 2015
TL;DR: VC3 is the first system that allows users to run distributed MapReduce computations in the cloud while keeping their code and data secret, and ensuring the correctness and completeness of their results.
Abstract: We present VC3, the first system that allows users to run distributed MapReduce computations in the cloud while keeping their code and data secret, and ensuring the correctness and completeness of their results. VC3 runs on unmodified Hadoop, but crucially keeps Hadoop, the operating system and the hypervisor out of the TCB; thus, confidentiality and integrity are preserved even if these large components are compromised. VC3 relies on SGX processors to isolate memory regions on individual computers, and to deploy new protocols that secure distributed MapReduce computations. VC3 optionally enforces region self-integrity invariants for all MapReduce code running within isolated regions, to prevent attacks due to unsafe memory reads and writes. Experimental results on common benchmarks show that VC3 performs well compared with unprotected Hadoop: VC3's average runtime overhead is negligible for its base security guarantees, 4.5% with write integrity and 8% with read/write integrity.

Journal ArticleDOI
28 Aug 2015-Science
TL;DR: It is reported that symbiotic members of the human gut microbiota induce a distinct Treg population in the mouse colon, which constrains immuno-inflammatory responses.
Abstract: T regulatory cells that express the transcription factor Foxp3 (Foxp3+ Tregs) promote tissue homeostasis in several settings. We now report that symbiotic members of the human gut microbiota induce a distinct Treg population in the mouse colon, which constrains immuno-inflammatory responses. This induction—which we find to map to a broad, but specific, array of individual bacterial species—requires the transcription factor Rorγ, paradoxically, in that Rorγ is thought to antagonize FoxP3 and to promote T helper 17 (TH17) cell differentiation. Rorγ’s transcriptional footprint differs in colonic Tregs and TH17 cells and controls important effector molecules. Rorγ, and the Tregs that express it, contribute substantially to regulating colonic TH1/TH17 inflammation. Thus, the marked context-specificity of Rorγ results in very different outcomes even in closely related cell types.

Proceedings ArticleDOI
07 Dec 2015
TL;DR: It is demonstrated that a classical MHT implementation from the 90's can come surprisingly close to the performance of state-of-the-art methods on standard benchmark datasets, and it is shown that appearance models can be learned efficiently via a regularized least squares framework.
Abstract: This paper revisits the classical multiple hypotheses tracking (MHT) algorithm in a tracking-by-detection framework. The success of MHT largely depends on the ability to maintain a small list of potential hypotheses, which can be facilitated with the accurate object detectors that are currently available. We demonstrate that a classical MHT implementation from the 90's can come surprisingly close to the performance of state-of-the-art methods on standard benchmark datasets. In order to further utilize the strength of MHT in exploiting higher-order information, we introduce a method for training online appearance models for each track hypothesis. We show that appearance models can be learned efficiently via a regularized least squares framework, requiring only a few extra operations for each hypothesis branch. We obtain state-of-the-art results on popular tracking-by-detection datasets such as PETS and the recent MOT challenge.
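The appearance model described above is ordinary ridge regression, whose closed form w = (XᵀX + λI)⁻¹Xᵀy is what makes per-hypothesis updates cheap. A minimal numpy sketch on made-up appearance-feature data (the feature dimensionality and λ below are illustrative, not the paper's settings):

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form regularized least squares: w = (X^T X + lam*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Synthetic training data: 50 detections with 4 appearance features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
w_true = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ w_true + 0.01 * rng.normal(size=50)

w = ridge_fit(X, y, lam=0.1)
```

In the MHT setting, each track hypothesis maintains such a regressor scoring how well new detections match the track's appearance; because XᵀX and Xᵀy can be accumulated incrementally, extending a hypothesis branch costs only a few extra operations, as the paper notes.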

Journal ArticleDOI
TL;DR: A large study on the error patterns for the MiSeq based on 16S rRNA amplicon sequencing data is conducted and it is shown that the library preparation method and the choice of primers are the most significant sources of bias and cause distinct error patterns.
Abstract: With read lengths of currently up to 2 × 300 bp, high throughput and low sequencing costs, Illumina's MiSeq is becoming one of the most utilized sequencing platforms worldwide. The platform is manageable and affordable even for smaller labs. This enables quick turnaround on a broad range of applications such as targeted gene sequencing, metagenomics, small genome sequencing and clinical molecular diagnostics. However, Illumina error profiles are still poorly understood and programs are therefore not designed for the idiosyncrasies of Illumina data. A better knowledge of the error patterns is essential for sequence analysis and vital if we are to draw valid conclusions. Studying true genetic variation in a population sample is fundamental for understanding diseases, evolution and origin. We conducted a large study on the error patterns for the MiSeq based on 16S rRNA amplicon sequencing data. We tested state-of-the-art library preparation methods for amplicon sequencing and showed that the library preparation method and the choice of primers are the most significant sources of bias and cause distinct error patterns. Furthermore, we tested the efficiency of various error correction strategies and identified quality trimming (Sickle) combined with error correction (BayesHammer) followed by read overlapping (PANDAseq) as the most successful approach, reducing substitution error rates on average by 93%.

Journal ArticleDOI
TL;DR: Corticosteroids significantly reduced hearing loss and neurological sequelae, but did not reduce overall mortality; the data support the use of corticosteroids in patients with bacterial meningitis in high-income countries.
Abstract: Background In experimental studies, the outcome of bacterial meningitis has been related to the severity of inflammation in the subarachnoid space. Corticosteroids reduce this inflammatory response. Objectives To examine the effect of adjuvant corticosteroid therapy versus placebo on mortality, hearing loss and neurological sequelae in people of all ages with acute bacterial meningitis. Search methods We searched CENTRAL (2015, Issue 1), MEDLINE (1966 to January week 4, 2015), EMBASE (1974 to February 2015), Web of Science (2010 to February 2015), CINAHL (2010 to February 2015) and LILACS (2010 to February 2015). Selection criteria Randomised controlled trials (RCTs) of corticosteroids for acute bacterial meningitis. Data collection and analysis We scored RCTs for methodological quality. We collected outcomes and adverse effects. We performed subgroup analyses for children and adults, causative organisms, low-income versus high-income countries, time of steroid administration and study quality. Main results We included 25 studies involving 4121 participants (2511 children and 1517 adults; 93 mixed population). Four studies were of high quality with no risk of bias, 14 of medium quality and seven of low quality, indicating a moderate risk of bias for the total analysis. Nine studies were performed in low-income countries and 16 in high-income countries. Corticosteroids were associated with a non-significant reduction in mortality (17.8% versus 19.9%; risk ratio (RR) 0.90, 95% confidence interval (CI) 0.80 to 1.01, P value = 0.07). A similar non-significant reduction in mortality was observed in adults receiving corticosteroids (RR 0.74, 95% CI 0.53 to 1.05, P value = 0.09). Corticosteroids were associated with lower rates of severe hearing loss (RR 0.67, 95% CI 0.51 to 0.88), any hearing loss (RR 0.74, 95% CI 0.63 to 0.87) and neurological sequelae (RR 0.83, 95% CI 0.69 to 1.00). 
Subgroup analyses for causative organisms showed that corticosteroids reduced mortality in Streptococcus pneumoniae (S. pneumoniae) meningitis (RR 0.84, 95% CI 0.72 to 0.98), but not in Haemophilus influenzae (H. influenzae) or Neisseria meningitidis (N. meningitidis) meningitis. Corticosteroids reduced severe hearing loss in children with H. influenzae meningitis (RR 0.34, 95% CI 0.20 to 0.59) but not in children with meningitis due to non-Haemophilus species. In high-income countries, corticosteroids reduced severe hearing loss (RR 0.51, 95% CI 0.35 to 0.73), any hearing loss (RR 0.58, 95% CI 0.45 to 0.73) and short-term neurological sequelae (RR 0.64, 95% CI 0.48 to 0.85). There was no beneficial effect of corticosteroid therapy in low-income countries. Subgroup analysis for study quality showed no effect of corticosteroids on severe hearing loss in high-quality studies. Corticosteroid treatment was associated with an increase in recurrent fever (RR 1.27, 95% CI 1.09 to 1.47), but not with other adverse events. Authors' conclusions Corticosteroids significantly reduced hearing loss and neurological sequelae, but did not reduce overall mortality. Data support the use of corticosteroids in patients with bacterial meningitis in high-income countries. We found no beneficial effect in low-income countries.

Posted Content
TL;DR: This paper proposes a novel UDA framework based on an iterative self-training (ST) procedure, in which the problem is formulated as latent variable loss minimization and solved by alternately generating pseudo-labels on target data and re-training the model with these labels.
Abstract: Recent deep networks have achieved state-of-the-art performance on a variety of semantic segmentation tasks. Despite such progress, these models often face challenges in real-world 'wild tasks', where a large difference exists between labeled training/source data and unseen test/target data. This difference, often referred to as the 'domain gap', can cause significantly decreased performance that cannot easily be remedied by further increasing the representation power. Unsupervised domain adaptation (UDA) seeks to overcome this problem without target domain labels. In this paper, we propose a novel UDA framework based on an iterative self-training procedure, where the problem is formulated as latent variable loss minimization and can be solved by alternately generating pseudo-labels on target data and re-training the model with these labels. On top of self-training, we also propose a novel class-balanced self-training framework to avoid the gradual dominance of large classes in pseudo-label generation, and introduce spatial priors to refine the generated labels. Comprehensive experiments show that the proposed methods achieve state-of-the-art semantic segmentation performance under multiple major UDA settings.
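The class-balanced pseudo-label step can be sketched as follows. This is a minimal numpy toy (per-sample classification standing in for the paper's per-pixel segmentation setting, with an assumed `keep_portion` hyperparameter); the key mechanism is that the confidence threshold is set per class, which prevents large, easy classes from dominating the pseudo-label pool:

```python
import numpy as np

def class_balanced_pseudo_labels(probs, keep_portion=0.5):
    """probs: (N, C) softmax outputs on unlabeled target data.
    Returns a pseudo-label per sample, with -1 marking samples left
    unlabeled. Each class keeps only its own top `keep_portion`
    fraction of predictions, so thresholds are class-specific."""
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    labels = np.full(len(preds), -1, dtype=int)
    for c in np.unique(preds):
        idx = np.flatnonzero(preds == c)
        # per-class confidence cutoff: the (1 - keep_portion) quantile
        thresh = np.quantile(conf[idx], 1.0 - keep_portion)
        labels[idx[conf[idx] >= thresh]] = c
    return labels
```

In the full self-training loop, the model would be re-trained on the samples with labels != -1 and the procedure iterated, optionally raising `keep_portion` each round.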

Journal ArticleDOI
TL;DR: Signs of the Weyl fermion chiral anomaly in the magneto-transport of TaAs are reported and it is observed that high mobility TaAs samples become more conductive as a magnetic field is applied along the direction of the current for certain ranges of the field strength.
Abstract: Weyl semimetals provide the realization of Weyl fermions in solid-state physics. Among all the physical phenomena enabled by Weyl semimetals, the chiral anomaly is the most unusual. Here, we report signatures of the chiral anomaly in magneto-transport measurements on the first Weyl semimetal, TaAs. We show negative magnetoresistance under parallel electric and magnetic fields: unlike most metals, whose resistivity increases under an external magnetic field, our high-mobility TaAs samples become more conductive as a magnetic field is applied along the direction of the current for certain ranges of the field strength. We present systematically detailed data and careful analyses that allow us to exclude other possible origins of the observed negative magnetoresistance. Our transport data, corroborated by photoemission measurements, first-principles calculations and theoretical analyses, collectively demonstrate signatures of the Weyl fermion chiral anomaly in the magneto-transport of TaAs.

Posted Content
TL;DR: AI-based automated CT image analysis tools were developed for the detection, quantification and tracking of coronavirus disease; the authors demonstrate that the tools can differentiate coronavirus patients from non-patients and measure the progression of disease in each patient over time using a 3D volume review.
Abstract: Purpose: Develop AI-based automated CT image analysis tools for detection, quantification and tracking of coronavirus, and demonstrate that they can differentiate coronavirus patients from non-patients. Materials and Methods: Multiple international datasets were included, among them scans from disease-affected areas in China. We present a system that utilizes robust 2D and 3D deep learning models, modifying and adapting existing AI models and combining them with clinical understanding. We conducted multiple retrospective experiments to analyze the performance of the system in the detection of suspected COVID-19 thoracic CT features and to evaluate the evolution of the disease in each patient over time using a 3D volume review, generating a Corona score. The study includes a testing set of 157 international patients (China and U.S.). Results: Classification results for coronavirus vs non-coronavirus cases per thoracic CT study were 0.996 AUC (95% CI: 0.989-1.00) on datasets of Chinese control and infected patients. Possible working point: 98.2% sensitivity, 92.2% specificity. For time analysis of coronavirus patients, the system output enables quantitative measurements of smaller opacities (volume, diameter) and visualization of the larger opacities in a slice-based heat map or a 3D volume display. Our suggested Corona score measures the progression of disease over time. Conclusion: This initial study, which is currently being expanded to a larger population, demonstrated that rapidly developed AI-based image analysis can achieve high accuracy in detection of coronavirus as well as quantification and tracking of disease burden.
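The reported "working point" (98.2% sensitivity, 92.2% specificity) is one threshold chosen on the classifier's ROC curve. A hedged sketch of how such a point is read off from raw classifier scores, using toy data rather than the study's:

```python
import numpy as np

def working_point(scores, labels, threshold):
    """Sensitivity and specificity of a binary classifier at a given
    score threshold. labels: 1 = positive study, 0 = control."""
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1))   # positives correctly flagged
    fn = np.sum(~pred & (labels == 1))  # positives missed
    tn = np.sum(~pred & (labels == 0))  # controls correctly cleared
    fp = np.sum(pred & (labels == 0))   # controls falsely flagged
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping `threshold` over all observed scores traces the full ROC curve; the AUC of 0.996 summarizes that curve over every possible working point.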

Journal ArticleDOI
TL;DR: The mouse I105N/human I104N mutation, previously shown to prevent macrophage pyroptosis, attenuated both cell killing by p30 in a 293T transient overexpression system and membrane permeabilization in vitro, suggesting that the mutants are actually hypomorphs but must be above a certain concentration to exhibit activity.
Abstract: Gasdermin-D (GsdmD) is a critical mediator of innate immune defense because its cleavage by the inflammatory caspases 1, 4, 5, and 11 yields an N-terminal p30 fragment that induces pyroptosis, a death program important for the elimination of intracellular bacteria. Precisely how GsdmD p30 triggers pyroptosis has not been established. Here we show that human GsdmD p30 forms functional pores within membranes. When liberated from the corresponding C-terminal GsdmD p20 fragment in the presence of liposomes, GsdmD p30 localized to the lipid bilayer, whereas p20 remained in the aqueous environment. Within liposomes, p30 existed as higher-order oligomers and formed ring-like structures that were visualized by negative stain electron microscopy. These structures appeared within minutes of GsdmD cleavage and released Ca(2+) from preloaded liposomes. Consistent with GsdmD p30 favoring association with membranes, p30 was only detected in the membrane-containing fraction of immortalized macrophages after caspase-11 activation by lipopolysaccharide. We found that the mouse I105N/human I104N mutation, which has been shown to prevent macrophage pyroptosis, attenuated both cell killing by p30 in a 293T transient overexpression system and membrane permeabilization in vitro, suggesting that the mutants are actually hypomorphs but must be above a certain concentration to exhibit activity. Collectively, our data suggest that GsdmD p30 kills cells by forming pores that compromise the integrity of the cell membrane.

Posted Content
TL;DR: This work explores and releases two BERT models for clinical text: one for generic clinical text and another for discharge summaries specifically, and demonstrates that using a domain-specific model yields performance improvements on 3/5 clinical NLP tasks, establishing a new state-of-the-art on the MedNLI dataset.
Abstract: Contextual word embedding models such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) have dramatically improved performance for many natural language processing (NLP) tasks in recent months. However, these models have been minimally explored on specialty corpora, such as clinical text; moreover, in the clinical domain, no publicly-available pre-trained BERT models yet exist. In this work, we address this need by exploring and releasing BERT models for clinical text: one for generic clinical text and another for discharge summaries specifically. We demonstrate that using a domain-specific model yields performance improvements on three common clinical NLP tasks as compared to nonspecific embeddings. However, these domain-specific models are not as performant on two clinical de-identification tasks; we argue that this is a natural consequence of the differences between de-identified source text and synthetically non-de-identified task text.

Journal ArticleDOI
TL;DR: This tutorial review summarizes the recent progress in the development of specific AIEgen-based light-up bioprobes and hopes to provide guidelines for the design of more advanced AIE sensing and imaging platforms with high selectivity, great sensitivity and wide adaptability to a broad range of biomedical applications.
Abstract: Driven by the high demand for sensitive and specific tools for optical sensing and imaging, bioprobes with various working mechanisms and advanced functionalities are flourishing at an incredible speed. Conventional fluorescent probes suffer from the notorious effect of aggregation-caused quenching, which imposes limitations on their labelling efficiency or concentration to achieve the desired sensitivity. The recently emerged fluorogens with an aggregation-induced emission (AIE) feature offer a timely remedy to tackle this challenge. Utilizing the unique properties of AIE fluorogens (AIEgens), specific light-up probes have been constructed through functionalization with recognition elements, showing advantages such as low background interference, a high signal-to-noise ratio and superior photostability with activatable therapeutic effects. In this tutorial review, we summarize the recent progress in the development of specific AIEgen-based light-up bioprobes. Through illustration of their operation mechanisms and application examples, we hope to provide guidelines for the design of more advanced AIE sensing and imaging platforms with high selectivity, great sensitivity and wide adaptability to a broad range of biomedical applications.

Proceedings ArticleDOI
19 Oct 2017
TL;DR: Comprehensive experimental results show that the proposed ACMR method is superior in learning effective subspace representation and that it significantly outperforms the state-of-the-art cross-modal retrieval methods.
Abstract: Cross-modal retrieval aims to enable flexible retrieval experience across different modalities (e.g., texts vs. images). The core of cross-modal retrieval research is to learn a common subspace where the items of different modalities can be directly compared to each other. In this paper, we present a novel Adversarial Cross-Modal Retrieval (ACMR) method, which seeks an effective common subspace based on adversarial learning. Adversarial learning is implemented as an interplay between two processes. The first process, a feature projector, tries to generate a modality-invariant representation in the common subspace and to confuse the other process, a modality classifier, which tries to discriminate between different modalities based on the generated representation. We further impose triplet constraints on the feature projector in order to minimize the gap among the representations of all items from different modalities with the same semantic labels, while maximizing the distances among semantically different images and texts. Through the joint exploitation of the above, the underlying cross-modal semantic structure of multimedia data is better preserved when this data is projected into the common subspace. Comprehensive experimental results on four widely used benchmark datasets show that the proposed ACMR method is superior in learning effective subspace representation and that it significantly outperforms the state-of-the-art cross-modal retrieval methods.
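The triplet constraint described above can be sketched as a standard triplet margin loss over embedding vectors. This is a minimal numpy toy under assumed inputs (anchor and positive share a semantic label, possibly across modalities; negative does not); the actual ACMR objective combines this with the adversarial modality-classifier term, which is omitted here:

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """(N, D) embedding batches. Pulls same-label pairs together and
    pushes different-label pairs at least `margin` apart in the
    learned common subspace."""
    d_pos = np.linalg.norm(anchor - positive, axis=1)  # same label
    d_neg = np.linalg.norm(anchor - negative, axis=1)  # different label
    return np.maximum(0.0, d_pos - d_neg + margin).mean()
```

When anchors are image embeddings and positives/negatives are text embeddings (and vice versa), minimizing this loss is what closes the cross-modal gap for matching labels while separating non-matching ones.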

Journal ArticleDOI
04 Jan 2021
TL;DR: In this article, a decision analytical model assessed the relative amount of transmission from presymptomatic, never symptomatic, and symptomatic individuals across a range of scenarios in which the proportion of transmission from people who never develop symptoms (i.e., remain asymptomatic) and the infectious period were varied according to published best estimates.
Abstract: Importance Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the etiology of coronavirus disease 2019 (COVID-19), is readily transmitted person to person. Optimal control of COVID-19 depends on directing resources and health messaging to mitigation efforts that are most likely to prevent transmission, but the relative importance of such measures has been disputed. Objective To assess the proportion of SARS-CoV-2 transmissions in the community that likely occur from persons without symptoms. Design, Setting, and Participants This decision analytical model assessed the relative amount of transmission from presymptomatic, never symptomatic, and symptomatic individuals across a range of scenarios in which the proportion of transmission from people who never develop symptoms (ie, remain asymptomatic) and the infectious period were varied according to published best estimates. For all estimates, data from a meta-analysis were used to set the incubation period at a median of 5 days. The infectious period duration was maintained at 10 days, and peak infectiousness was varied between 3 and 7 days (−2 and +2 days relative to the median incubation period). The overall proportion of SARS-CoV-2 transmission from persons without symptoms was varied between 0% and 70% to assess a wide range of possible proportions. Main Outcomes and Measures Level of transmission of SARS-CoV-2 from presymptomatic, never symptomatic, and symptomatic individuals. Results The baseline assumptions for the model were that peak infectiousness occurred at the median of symptom onset and that 30% of individuals with infection never develop symptoms and are 75% as infectious as those who do develop symptoms. Combined, these baseline assumptions imply that persons with infection who never develop symptoms may account for approximately 24% of all transmission.
In this base case, 59% of all transmission came from asymptomatic transmission, comprising 35% from presymptomatic individuals and 24% from individuals who never develop symptoms. Under a broad range of values for each of these assumptions, at least 50% of new SARS-CoV-2 infections were estimated to have originated from exposure to individuals with infection but without symptoms. Conclusions and Relevance In this decision analytical model of multiple scenarios of proportions of asymptomatic individuals with COVID-19 and infectious periods, transmission from asymptomatic individuals was estimated to account for more than half of all transmissions. In addition to identification and isolation of persons with symptomatic COVID-19, effective control of spread will require reducing the risk of transmission from people with infection who do not have symptoms. These findings suggest that measures such as wearing masks, hand hygiene, social distancing, and strategic testing of people who are not ill will be foundational to slowing the spread of COVID-19 until safe and effective vaccines are available and widely used.
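The baseline "approximately 24%" figure follows directly from the two stated assumptions (30% of infections never develop symptoms, and those individuals are 75% as infectious as symptomatic ones); a quick arithmetic check:

```python
p_never = 0.30            # proportion of infections that never develop symptoms
rel_infectiousness = 0.75  # infectiousness relative to symptomatic infections

# Share of all transmission attributable to never-symptomatic infections:
# weight each group by (proportion of infections) x (relative infectiousness).
share_never = (p_never * rel_infectiousness) / (
    p_never * rel_infectiousness + (1 - p_never) * 1.0
)
print(f"{share_never:.1%}")  # → 24.3%, i.e. "approximately 24%"
```

The remaining 35% attributed to presymptomatic transmission additionally depends on how infectiousness is distributed around symptom onset, which this one-line check deliberately leaves out.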

Posted Content
TL;DR: This paper presents a self-supervised framework for training interest point detectors and descriptors suitable for a large number of multiple-view geometry problems in computer vision and introduces Homographic Adaptation, a multi-scale, multi-homography approach for boosting interest point detection repeatability and performing cross-domain adaptation.
Abstract: This paper presents a self-supervised framework for training interest point detectors and descriptors suitable for a large number of multiple-view geometry problems in computer vision. As opposed to patch-based neural networks, our fully-convolutional model operates on full-sized images and jointly computes pixel-level interest point locations and associated descriptors in one forward pass. We introduce Homographic Adaptation, a multi-scale, multi-homography approach for boosting interest point detection repeatability and performing cross-domain adaptation (e.g., synthetic-to-real). Our model, when trained on the MS-COCO generic image dataset using Homographic Adaptation, is able to repeatedly detect a much richer set of interest points than the initial pre-adapted deep model and any other traditional corner detector. The final system gives rise to state-of-the-art homography estimation results on HPatches when compared to LIFT, SIFT and ORB.
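The Homographic Adaptation idea can be sketched as: run the detector on several warped views of the image, map each response back to the original frame, and average. In this hedged numpy toy, `warp`/`unwarp` are caller-supplied function pairs standing in for true homography resampling (e.g. cv2.warpPerspective with a sampled H and its inverse), and `detector` returns a per-pixel heatmap:

```python
import numpy as np

def homographic_adaptation(image, detector, transforms):
    """transforms: list of (warp, unwarp) pairs, where unwarp inverts
    warp. Aggregating the detector's heatmap across warped views keeps
    points that are detected consistently under viewpoint change,
    boosting repeatability."""
    acc = np.zeros(image.shape, dtype=float)
    for warp, unwarp in transforms:
        acc += unwarp(detector(warp(image)))
    return acc / len(transforms)
```

In the paper's self-supervised pipeline, the aggregated heatmap then serves as the pseudo-ground-truth for re-training the detector on unlabeled images such as MS-COCO.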