Journal ArticleDOI
TL;DR: The authors highlight the critical role of social capital and networks in disaster survival and recovery and lay out recent literature and evidence on the topic, concluding with concrete policy recommendations for disaster managers, government decision makers, and no...
Abstract: Despite the ubiquity of disaster and the increasing toll in human lives and financial costs, much research and policy remain focused on physical infrastructure–centered approaches to such events. Governmental organizations such as the Department of Homeland Security, United States Federal Emergency Management Agency, United States Agency for International Development, and United Kingdom’s Department for International Development continue to spend heavily on hardening levees, raising existing homes, and repairing damaged facilities despite evidence that social, not physical, infrastructure drives resilience. This article highlights the critical role of social capital and networks in disaster survival and recovery and lays out recent literature and evidence on the topic. We look at definitions of social capital, measurement and proxies, types of social capital, and mechanisms and application. The article concludes with concrete policy recommendations for disaster managers, government decision makers, and no...

1,096 citations


Journal ArticleDOI
TL;DR: This paper proposes a nonconvex formulation of the phase retrieval problem together with a concrete solution algorithm, and its main contribution is a rigorous proof that the algorithm exactly recovers the phase information from a nearly minimal number of random measurements.
Abstract: We study the problem of recovering the phase from magnitude measurements; specifically, we wish to reconstruct a complex-valued signal $\boldsymbol{x} \in \mathbb{C}^{n}$ about which we have phaseless samples of the form $y_{r} = \left|\langle \boldsymbol{a}_{r}, \boldsymbol{x} \rangle\right|^{2}$, $r = 1, \ldots, m$ (knowledge of the phase of these samples would yield a linear system). This paper develops a nonconvex formulation of the phase retrieval problem as well as a concrete solution algorithm. In a nutshell, this algorithm starts with a careful initialization obtained by means of a spectral method, and then refines this initial estimate by iteratively applying novel update rules, which have low computational complexity, much like in a gradient descent scheme. The main contribution is that this algorithm is shown to rigorously allow the exact retrieval of phase information from a nearly minimal number of random measurements. Indeed, the sequence of successive iterates provably converges to the solution at a geometric rate so that the proposed scheme is efficient both in terms of computational and data resources. In theory, a variation on this scheme leads to a near-linear time algorithm for a physically realizable model based on coded diffraction patterns. We illustrate the effectiveness of our methods with various experiments on image data. Underlying our analysis are insights for the analysis of nonconvex optimization schemes that may have implications for computational problems beyond phase retrieval.
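To make the two-stage scheme concrete, here is a minimal NumPy sketch in the spirit of the described approach: a spectral initialization followed by gradient-style refinements of the intensity loss. It assumes Gaussian sensing vectors and a fixed heuristic step size, so it illustrates the idea rather than reproducing the paper's exact algorithm or step-size schedule.

```python
import numpy as np

# Illustrative sketch only: spectral initialization plus gradient-style
# refinement for phase retrieval, with assumed Gaussian sensing vectors.
rng = np.random.default_rng(0)
n, m = 64, 640                                    # signal length, number of measurements
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)        # unknown signal
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
y = np.abs(A @ x) ** 2                            # phaseless intensity measurements

# Spectral initialization: leading eigenvector of (1/m) * A^H diag(y) A,
# rescaled to the average measured energy.
Y = (A.conj().T * y) @ A / m
_, V = np.linalg.eigh(Y)                          # eigenvalues in ascending order
z = V[:, -1] * np.sqrt(y.mean())

# Gradient-style refinement of f(z) = (1/2m) * sum_r (|<a_r, z>|^2 - y_r)^2.
step = 0.2 / np.linalg.norm(z) ** 2               # heuristic step scaled by the init energy
for _ in range(2000):
    Az = A @ z
    z = z - step * (A.conj().T @ ((np.abs(Az) ** 2 - y) * Az) / m)

# Report error up to the unavoidable global phase ambiguity.
c = np.vdot(z, x)
print("relative error:", np.linalg.norm(x - z * c / abs(c)) / np.linalg.norm(x))
```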

1,096 citations


Journal ArticleDOI
TL;DR: CrystalExplorer is a native cross-platform program for the visualization and investigation of molecular crystal structures; the current version, CrystalExplorer21, can be downloaded and used free of charge for academic research.
Abstract: CrystalExplorer is a native cross-platform program supported on Windows, MacOS and Linux with the primary function of visualization and investigation of molecular crystal structures, especially through the decorated Hirshfeld surface and its corresponding two-dimensional fingerprint, and through the visualization of void spaces in the crystal via isosurfaces of the promolecule electron density. Over the past decade, significant changes and enhancements have been incorporated into the program, such as the capacity to accurately and quickly calculate and visualize quantitative intermolecular interactions and, perhaps most importantly, the ability to interface with the Gaussian and NWChem programs to calculate quantum-mechanical properties of molecules. The current version, CrystalExplorer21, incorporates these and other changes, and the software can be downloaded and used free of charge for academic research.

1,096 citations


Journal ArticleDOI
12 Jun 2015-Science
TL;DR: It is suggested that a golden age of animal tracking science has begun and that the upcoming years will be a time of unprecedented exciting discoveries.
Abstract: BACKGROUND The movement of animals makes them fascinating but difficult study subjects. Animal movements underpin many biological phenomena, and understanding them is critical for applications in conservation, health, and food. Traditional approaches to animal tracking used field biologists wielding antennas to record a few dozen locations per animal, revealing only the most general patterns of animal space use. The advent of satellite tracking automated this process, but initially was limited to larger animals and increased the resolution of trajectories to only a few hundred locations per animal. The last few years have shown exponential improvement in tracking technology, leading to smaller tracking devices that can return millions of movement steps for ever-smaller animals. Finally, we have a tool that returns high-resolution data that reveal the detailed facets of animal movement and its many implications for biodiversity, animal ecology, behavior, and ecosystem function. ADVANCES Improved technology has brought animal tracking into the realm of big data, not only through high-resolution movement trajectories, but also through the addition of other on-animal sensors and the integration of remote sensing data about the environment through which these animals are moving. These new data are opening up a breadth of new scientific questions about ecology, evolution, and physiology and enable the use of animals as sensors of the environment. High–temporal resolution movement data also can document brief but important contacts between animals, creating new opportunities to study social networks, as well as interspecific interactions such as competition and predation. With solar panels keeping batteries charged, “lifetime” tracks can now be collected for some species, while broader approaches are aiming for species-wide sampling across multiple populations. Miniaturized tags also help reduce the impact of the devices on the study subjects, improving animal welfare and scientific results. As in other disciplines, the explosion of data volume and variety has created new challenges and opportunities for information management, integration, and analysis. In an exciting interdisciplinary push, biologists, statisticians, and computer scientists have begun to develop new tools that are already leading to new insights and scientific breakthroughs. OUTLOOK We suggest that a golden age of animal tracking science has begun and that the upcoming years will be a time of unprecedented exciting discoveries. Technology continues to improve our ability to track animals, with the promise of smaller tags collecting more data, less invasively, on a greater variety of animals. The big-data tracking studies that are just now being pioneered will become commonplace. If analytical developments can keep pace, the field will be able to develop real-time predictive models that integrate habitat preferences, movement abilities, sensory capacities, and animal memories into movement forecasts. The unique perspective offered by big-data animal tracking enables a new view of animals as naturally evolved sensors of environment, which we think has the potential to help us monitor the planet in completely new ways. A massive multi-individual monitoring program would allow a quorum sensing of our planet, using a variety of species to tap into the diversity of senses that have evolved across animal groups, providing new insight on our world through the sixth sense of the global animal collective. 
We expect that the field will soon reach a transformational point where these studies not only inform us about particular species of animals but also allow the animals to teach us about the world.

1,096 citations


Journal ArticleDOI
TL;DR: The analyses suggest the existence of an industry bias that cannot be explained by standard 'Risk of bias' assessments.
Abstract: Background Clinical research affecting how doctors practice medicine is increasingly sponsored by companies that make drugs and medical devices. Previous systematic reviews have found that pharmaceutical industry sponsored studies are more often favorable to the sponsor’s product compared with studies with other sources of sponsorship. This review is an update using more stringent methodology and also investigating sponsorship of device studies. Objectives To investigate whether industry sponsored drug and device studies have more favorable outcomes and differ in risk of bias, compared with studies having other sources of sponsorship. Search methods We searched MEDLINE (1948 to September 2010), EMBASE (1980 to September 2010), the Cochrane Methodology Register (Issue 4, 2010) and Web of Science (August 2011). In addition, we searched reference lists of included papers, previous systematic reviews and author files. Selection criteria Cross-sectional studies, cohort studies, systematic reviews and meta-analyses that quantitatively compared primary research studies of drugs or medical devices sponsored by industry with studies with other sources of sponsorship. We had no language restrictions. Data collection and analysis Two assessors identified potentially relevant papers, and a decision about final inclusion was made by all authors. Two assessors extracted data, and we contacted authors of included papers for additional unpublished data. Outcomes included favorable results, favorable conclusions, effect size, risk of bias and whether the conclusions agreed with the study results. Two assessors assessed risk of bias of included papers. We calculated pooled risk ratios (RR) for dichotomous data (with 95% confidence intervals). Main results Forty-eight papers were included. Industry sponsored studies more often had favorable efficacy results, risk ratio (RR): 1.32 (95% confidence interval (CI): 1.21 to 1.44), harms results RR: 1.87 (95% CI: 1.54 to 2.27) and conclusions RR: 1.31 (95% CI: 1.20 to 1.44) compared with non-industry sponsored studies. Ten papers reported on sponsorship and effect size, but could not be pooled due to differences in their reporting of data. The results were heterogeneous; five papers found larger effect sizes in industry sponsored studies compared with non-industry sponsored studies and five papers did not find a difference in effect size. Only two papers (including 120 device studies) reported separate data for devices and we did not find a difference between drug and device studies on the association between sponsorship and conclusions (test for interaction, P = 0.23). Comparing industry and non-industry sponsored studies, we did not find a difference in risk of bias from sequence generation, allocation concealment and follow-up. However, industry sponsored studies more often had low risk of bias from blinding, RR: 1.32 (95% CI: 1.05 to 1.65), compared with non-industry sponsored studies. In industry sponsored studies, there was less agreement between the results and the conclusions than in non-industry sponsored studies, RR: 0.84 (95% CI: 0.70 to 1.01). Authors' conclusions Sponsorship of drug and device studies by the manufacturing company leads to more favorable results and conclusions than sponsorship by other sources. Our analyses suggest the existence of an industry bias that cannot be explained by standard 'Risk of bias' assessments.
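As background for the risk-ratio scale used throughout these results, the textbook computation for a single two-by-two comparison is sketched below; this is a generic formula, not the review's pooling model, which combines study-level ratios.

$$ \mathrm{RR} = \frac{a/(a+b)}{c/(c+d)}, \qquad \operatorname{SE}(\ln \mathrm{RR}) = \sqrt{\frac{1}{a} - \frac{1}{a+b} + \frac{1}{c} - \frac{1}{c+d}}, $$

with the 95% confidence interval given by $\exp\bigl(\ln \mathrm{RR} \pm 1.96 \cdot \operatorname{SE}\bigr)$. Here $a$ and $b$ count studies with and without the outcome in the first group (e.g., industry-sponsored studies) and $c$ and $d$ the corresponding counts in the comparison group; an RR above 1 with a CI excluding 1, such as 1.32 (1.21 to 1.44), indicates the outcome is more frequent in the first group.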

1,095 citations


Journal ArticleDOI
TL;DR: HDBSCAN performs DBSCAN over varying epsilon values and integrates the result to find a clustering that gives the best stability over epsilon, which allows HDBSCAN to find clusters of varying densities and to be more robust to parameter selection.
Abstract: HDBSCAN: Hierarchical Density-Based Spatial Clustering of Applications with Noise (Campello, Moulavi, and Sander 2013), (Campello et al. 2015). Performs DBSCAN over varying epsilon values and integrates the result to find a clustering that gives the best stability over epsilon. This allows HDBSCAN to find clusters of varying densities (unlike DBSCAN), and be more robust to parameter selection. The library also includes support for Robust Single Linkage clustering (Chaudhuri et al. 2014), (Chaudhuri and Dasgupta 2010), GLOSH outlier detection (Campello et al. 2015), and tools for visualizing and exploring cluster structures. Finally, support for prediction and soft clustering is also available.
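For orientation, here is a minimal usage sketch of the Python hdbscan library on toy data; the parameter values are illustrative assumptions, not recommendations.

```python
# Minimal sketch of clustering with the hdbscan library
# (assumed installed, e.g. via `pip install hdbscan`).
import numpy as np
import hdbscan

rng = np.random.default_rng(0)
# Toy data: two blobs of different density plus uniform background noise.
X = np.vstack([
    rng.normal(0.0, 0.3, size=(200, 2)),
    rng.normal(5.0, 1.0, size=(200, 2)),
    rng.uniform(-5, 10, size=(50, 2)),
])

clusterer = hdbscan.HDBSCAN(min_cluster_size=15,   # smallest cluster to accept (illustrative)
                            prediction_data=True)  # keep data needed for later prediction
labels = clusterer.fit_predict(X)                  # label -1 marks points treated as noise
print("clusters found:", labels.max() + 1)

# Assign new points to the existing clustering without refitting.
new_points = np.array([[0.1, -0.2], [5.2, 4.8]])
new_labels, strengths = hdbscan.approximate_predict(clusterer, new_points)
print(new_labels, strengths)
```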

1,095 citations


Journal ArticleDOI
TL;DR: It is proposed that four different types of tumor microenvironment exist based on the presence or absence of tumor-infiltrating lymphocytes and programmed death-ligand 1 (PD-L1) expression, and this stratification is reviewed.
Abstract: Cancer immunotherapy may become a major treatment backbone in many cancers over the next decade. There are numerous immune cell types found in cancers and many components of an immune reaction to cancer. Thus, the tumor has many strategies to evade an immune response. It has been proposed that four different types of tumor microenvironment exist based on the presence or absence of tumor-infiltrating lymphocytes and programmed death-ligand 1 (PD-L1) expression. We review this stratification and the latest in a series of results that shed light on new approaches for rationally designing ideal combination cancer therapies based on tumor immunology.

1,095 citations


Journal ArticleDOI
TL;DR: A panel of leading experts in the field attempts here to define several autophagy‐related terms based on specific biochemical features to formulate recommendations that facilitate the dissemination of knowledge within and outside the field of autophagic research.
Abstract: Over the past two decades, the molecular machinery that underlies autophagic responses has been characterized with ever increasing precision in multiple model organisms. Moreover, it has become clear that autophagy and autophagy-related processes have profound implications for human pathophysiology. However, considerable confusion persists about the use of appropriate terms to indicate specific types of autophagy and some components of the autophagy machinery, which may have detrimental effects on the expansion of the field. Driven by the overt recognition of such a potential obstacle, a panel of leading experts in the field attempts here to define several autophagy-related terms based on specific biochemical features. The ultimate objective of this collaborative exchange is to formulate recommendations that facilitate the dissemination of knowledge within and outside the field of autophagy research.

1,095 citations


Journal ArticleDOI
24 Nov 2017-Science
TL;DR: Type VI CRISPR-Cas systems, which contain the programmable single-effector RNA-guided ribonuclease Cas13, are profiled in order to engineer a Cas13 ortholog capable of robust knockdown; the resulting REPAIR system presents a promising RNA-editing platform with broad applicability for research, therapeutics, and biotechnology.
Abstract: Nucleic acid editing holds promise for treating genetic disease, particularly at the RNA level, where disease-relevant sequences can be rescued to yield functional protein products. Type VI CRISPR-Cas systems contain the programmable single-effector RNA-guided ribonuclease Cas13. We profiled type VI systems in order to engineer a Cas13 ortholog capable of robust knockdown and demonstrated RNA editing by using catalytically inactive Cas13 (dCas13) to direct adenosine-to-inosine deaminase activity by ADAR2 (adenosine deaminase acting on RNA type 2) to transcripts in mammalian cells. This system, referred to as RNA Editing for Programmable A to I Replacement (REPAIR), which has no strict sequence constraints, can be used to edit full-length transcripts containing pathogenic mutations. We further engineered this system to create a high-specificity variant and minimized the system to facilitate viral delivery. REPAIR presents a promising RNA-editing platform with broad applicability for research, therapeutics, and biotechnology.

1,095 citations


Journal ArticleDOI
11 May 2016-Nature
TL;DR: It is demonstrated that the Brazilian ZIKV strain (ZIKVBR) infects fetuses, causing intrauterine growth restriction (IUGR), and that it crosses the placenta and causes microcephaly by targeting cortical progenitor cells, inducing cell death by apoptosis and autophagy and impairing neurodevelopment.
Abstract: Zika virus (ZIKV) is an arbovirus belonging to the genus Flavivirus (family Flaviviridae) and was first described in 1947 in Uganda following blood analyses of sentinel Rhesus monkeys. Until the twentieth century, the African and Asian lineages of the virus did not cause meaningful infections in humans. However, in 2007, vectored by Aedes aegypti mosquitoes, ZIKV caused the first noteworthy epidemic on the Yap Island in Micronesia. Patients experienced fever, skin rash, arthralgia and conjunctivitis. From 2013 to 2015, the Asian lineage of the virus caused further massive outbreaks in New Caledonia and French Polynesia. In 2013, ZIKV reached Brazil, later spreading to other countries in South and Central America. In Brazil, the virus has been linked to congenital malformations, including microcephaly and other severe neurological diseases, such as Guillain-Barre syndrome. Despite clinical evidence, direct experimental proof showing that the Brazilian ZIKV (ZIKV(BR)) strain causes birth defects remains absent. Here we demonstrate that ZIKV(BR) infects fetuses, causing intrauterine growth restriction, including signs of microcephaly, in mice. Moreover, the virus infects human cortical progenitor cells, leading to an increase in cell death. We also report that the infection of human brain organoids results in a reduction of proliferative zones and disrupted cortical layers. These results indicate that ZIKV(BR) crosses the placenta and causes microcephaly by targeting cortical progenitor cells, inducing cell death by apoptosis and autophagy, and impairing neurodevelopment. Our data reinforce the growing body of evidence linking the ZIKV(BR) outbreak to the alarming number of cases of congenital brain malformations. Our model can be used to determine the efficiency of therapeutic approaches to counteracting the harmful impact of ZIKV(BR) in human neurodevelopment.

1,095 citations


Journal ArticleDOI
21 Apr 2020-JAMA
TL;DR: COVID-19 is thought to have higher mortality than seasonal influenza, even as wide variation is reported, and the pressure on the global health care workforce continues to intensify.
Abstract: Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) continues to spread internationally. Worldwide, more than 100 000 cases of coronavirus disease 2019 (COVID-19, the disease caused by SARS-CoV-2) and more than 3500 deaths have been reported. COVID-19 is thought to have higher mortality than seasonal influenza, even as wide variation is reported. While the World Health Organization (WHO) estimates global mortality at 3.4%, South Korea has noted mortality of about 0.6%. Vaccine development and research into medical treatment for COVID-19 are under way, but are many months away. Meanwhile, the pressure on the global health care workforce continues to intensify. This pressure takes 2 forms. The first is the potentially overwhelming burden of illnesses that stresses health system capacity and the second is the adverse effects on health care workers, including the risk of infection.

Journal ArticleDOI
TL;DR: Analysis of the tumour immune microenvironment in the context of anti-PD-1 therapy in two fully immunocompetent mouse models of lung adenocarcinoma suggests that upregulation of TIM-3 and other immune checkpoints may be targetable biomarkers associated with adaptive resistance to PD-1 blockade.
Abstract: Despite compelling antitumour activity of antibodies targeting the programmed death 1 (PD-1): programmed death ligand 1 (PD-L1) immune checkpoint in lung cancer, resistance to these therapies has increasingly been observed. In this study, to elucidate mechanisms of adaptive resistance, we analyse the tumour immune microenvironment in the context of anti-PD-1 therapy in two fully immunocompetent mouse models of lung adenocarcinoma. In tumours progressing following response to anti-PD-1 therapy, we observe upregulation of alternative immune checkpoints, notably T-cell immunoglobulin mucin-3 (TIM-3), in PD-1 antibody bound T cells and demonstrate a survival advantage with addition of a TIM-3 blocking antibody following failure of PD-1 blockade. Two patients who developed adaptive resistance to anti-PD-1 treatment also show a similar TIM-3 upregulation in blocking antibody-bound T cells at treatment failure. These data suggest that upregulation of TIM-3 and other immune checkpoints may be targetable biomarkers associated with adaptive resistance to PD-1 blockade.

Journal ArticleDOI
TL;DR: A combined theoretical and experimental study is presented to establish ternary pyrite-type cobalt phosphosulphide (CoPS) as a high-performance Earth-abundant catalyst for electrochemical and photoelectrochemical hydrogen production.
Abstract: The scalable and sustainable production of hydrogen fuel through water splitting demands efficient and robust Earth-abundant catalysts for the hydrogen evolution reaction (HER). Building on promising metal compounds with high HER catalytic activity, such as pyrite structure cobalt disulphide (CoS2), and substituting non-metal elements to tune the hydrogen adsorption free energy could lead to further improvements in catalytic activity. Here we present a combined theoretical and experimental study to establish ternary pyrite-type cobalt phosphosulphide (CoPS) as a high-performance Earth-abundant catalyst for electrochemical and photoelectrochemical hydrogen production. Nanostructured CoPS electrodes achieved a geometrical catalytic current density of 10 mA cm(-2) at overpotentials as low as 48 mV, with outstanding long-term operational stability. Integrated photocathodes of CoPS on n(+)-p-p(+) silicon micropyramids achieved photocurrents up to 35 mA cm(-2) at 0 V versus the reversible hydrogen electrode (RHE), onset photovoltages as high as 450 mV versus RHE, and the most efficient solar-driven hydrogen generation from Earth-abundant systems.


Proceedings ArticleDOI
21 Jul 2017
TL;DR: This paper proposes a novel adaptive attention model with a visual sentinel that sets the new state-of-the-art by a significant margin on image captioning.
Abstract: Attention-based neural encoder-decoder frameworks have been widely adopted for image captioning. Most methods force visual attention to be active for every generated word. However, the decoder likely requires little to no visual information from the image to predict non-visual words such as "the" and "of". Other words that may seem visual can often be predicted reliably just from the language model, e.g., "sign" after "behind a red stop" or "phone" following "talking on a cell". In this paper, we propose a novel adaptive attention model with a visual sentinel. At each time step, our model decides whether to attend to the image (and if so, to which regions) or to the visual sentinel, in order to extract meaningful information for sequential word generation. We test our method on the COCO image captioning 2015 challenge dataset and Flickr30K. Our approach sets the new state-of-the-art by a significant margin.
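In rough terms (generic notation, not necessarily the paper's exact parameterization), at each step $t$ the decoder mixes the attended image context $c_t$ with a visual sentinel vector $s_t$ distilled from its own hidden state:

$$ \hat{c}_t = \beta_t\, s_t + (1 - \beta_t)\, c_t, \qquad \beta_t \in [0, 1], $$

so a gate value near 1 lets the model fall back on its language model for non-visual words, while a value near 0 grounds the next word in attended image regions.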

Journal ArticleDOI
TL;DR: Treatments for obesity include behavioral therapy, pharmacotherapy, and bariatric surgery, and some sequelae of obesity are reversed with weight loss.
Abstract: Obesity is prevalent in the U.S. population and contributes significantly to morbidity and mortality. Treatments include behavioral therapy, pharmacotherapy, and bariatric surgery. Some sequelae of obesity are reversed with weight loss. Maintaining weight loss is a challenge.

Proceedings ArticleDOI
06 Sep 2015
TL;DR: This paper investigates audio-level speech augmentation methods which directly process the raw signal, and presents results on 4 different LVCSR tasks with training data ranging from 100 hours to 1000 hours, to examine the effectiveness of audio augmentation in a variety of data scenarios.
Abstract: Data augmentation is a common strategy adopted to increase the quantity of training data, avoid overfitting and improve robustness of the models. In this paper, we investigate audio-level speech augmentation methods which directly process the raw signal. The method we particularly recommend is to change the speed of the audio signal, producing 3 versions of the original signal with speed factors of 0.9, 1.0 and 1.1. The proposed technique has a low implementation cost, making it easy to adopt. We present results on 4 different LVCSR tasks with training data ranging from 100 hours to 1000 hours, to examine the effectiveness of audio augmentation in a variety of data scenarios. An average relative improvement of 4.3% was observed across the 4 tasks.
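A minimal sketch of the speed-change idea is shown below, implemented by resampling so that both tempo and pitch shift as if playback ran faster or slower; it is an illustration with scipy, not the authors' implementation.

```python
# Illustrative speed perturbation by resampling: a speed factor f shortens the
# signal by 1/f and shifts pitch accordingly (not the paper's exact pipeline).
from fractions import Fraction
import numpy as np
from scipy.signal import resample_poly

def speed_perturb(waveform, factor):
    """Return the waveform played back `factor` times faster at the same
    sample rate (factor < 1 slows it down and lengthens it)."""
    frac = Fraction(1.0 / factor).limit_denominator(100)  # rational approximation of 1/f
    return resample_poly(waveform, frac.numerator, frac.denominator)

sr = 16000
t = np.arange(sr) / sr                       # one second of a 440 Hz tone
x = np.sin(2 * np.pi * 440 * t)

versions = {f: speed_perturb(x, f) for f in (0.9, 1.0, 1.1)}  # 3-way augmentation
for f, y in versions.items():
    print(f, len(y) / sr, "seconds")         # roughly 1.11 s, 1.00 s, 0.91 s
```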

Journal ArticleDOI
TL;DR: It is shown that AdapterRemoval v2 compares favorably with existing tools, offering superior throughput to most alternatives examined here for both single- and multi-threaded operation.
Abstract: As high-throughput sequencing platforms produce longer and longer reads, sequences generated from short inserts, such as those obtained from fossil and degraded material, are increasingly expected to contain adapter sequences. Efficient adapter trimming algorithms are also needed to process the growing amount of data generated per sequencing run. We introduce AdapterRemoval v2, a major revision of AdapterRemoval v1, which introduces (i) striking improvements in throughput, through the use of single instruction, multiple data (SIMD; SSE1 and SSE2) instructions and multi-threading support, (ii) the ability to handle datasets containing reads or read-pairs with different adapters or adapter pairs, (iii) simultaneous demultiplexing and adapter trimming, (iv) the ability to reconstruct adapter sequences from paired-end reads for poorly documented data sets, and (v) native gzip and bzip2 support. We show that AdapterRemoval v2 compares favorably with existing tools, while offering superior throughput to most alternatives examined here, both for single and multi-threaded operations.

Proceedings ArticleDOI
21 Mar 2016
TL;DR: The authors incorporate copying into neural network-based Seq2Seq learning and propose a new model called CopyNet with an encoder-decoder structure, which nicely integrates the regular way of word generation in the decoder with a new copying mechanism that can choose sub-sequences in the input sequence and put them at proper places in the output sequence.
Abstract: We address an important problem in sequence-to-sequence (Seq2Seq) learning referred to as copying, in which certain segments in the input sequence are selectively replicated in the output sequence. A similar phenomenon is observable in human language communication. For example, humans tend to repeat entity names or even long phrases in conversation. The challenge with regard to copying in Seq2Seq is that new machinery is needed to decide when to perform the operation. In this paper, we incorporate copying into neural network-based Seq2Seq learning and propose a new model called CopyNet with encoder-decoder structure. CopyNet can nicely integrate the regular way of word generation in the decoder with the new copying mechanism which can choose sub-sequences in the input sequence and put them at proper places in the output sequence. Our empirical study on both synthetic data sets and real world data sets demonstrates the efficacy of CopyNet. For example, CopyNet can outperform regular RNN-based models by remarkable margins on text summarization tasks.
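Schematically (simplified notation, not the paper's exact scoring functions), the decoder's distribution over the next token mixes a generate mode over the vocabulary with a copy mode over source positions under a shared normalization:

$$ p(y_t \mid \cdot) \;=\; \frac{1}{Z} \exp\bigl(\psi_g(y_t)\bigr) \;+\; \frac{1}{Z} \sum_{j:\, x_j = y_t} \exp\bigl(\psi_c(x_j)\bigr), $$

where $\psi_g$ scores vocabulary words, $\psi_c$ scores source positions $j$, and $Z$ normalizes over both modes, so a token appearing in the source can be produced even when it is out of vocabulary.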


Journal ArticleDOI
TL;DR: The cause(s) of preeclampsia and the optimal clinical management of the hypertensive disorders of pregnancy remain uncertain; therefore, it is recommended that every hypertensive pregnant woman be offered an opportunity to participate in research, clinical trials, and follow-up studies.
Abstract: These recommendations from the International Society for the Study of Hypertension in Pregnancy (ISSHP) are based on available literature and expert opinion. It is intended that this be a living document, to be updated when needed as more research becomes available to influence good clinical practice. Unfortunately, there is a relative lack of high-quality randomized trials in the field of hypertension in pregnancy compared with studies in essential hypertension outside of pregnancy, and ISSHP encourages greater funding and uptake of collaborative research in this field. Accordingly, the quality of evidence for the recommendations in this document has not been graded although relevant references and explanations are provided for each recommendation. The document will be a living guideline, and we hope to be able to grade recommendations in the future. Guidelines and recommendations for management of hypertension in pregnancy are typically written for implementation in an ideal setting. It is acknowledged that in many parts of the world, it will not be possible to adopt all of these recommendations; for this reason, options for management in less-resourced settings are discussed separately in relation to diagnosis, evaluation, and treatment. This document has been endorsed by the International Society of Obstetric Medicine and the Japanese Society for the Study of Hypertension in Pregnancy. All units managing hypertensive pregnant women should maintain and review uniform departmental management protocols and conduct regular audits of maternal and fetal outcomes. The cause(s) of preeclampsia and the optimal clinical management of the hypertensive disorders of pregnancy remain uncertain; therefore, we recommend that every hypertensive pregnant woman be offered an opportunity to participate in research, clinical trials, and follow-up studies. Classification: 1. Hypertension in pregnancy may be chronic (predating pregnancy or diagnosed before 20 weeks of pregnancy) or de novo (either preeclampsia or gestational hypertension). 2. Chronic hypertension is associated with adverse …

Proceedings ArticleDOI
TL;DR: A new chest X-ray database, ChestX-ray8, is presented, which comprises 108,948 frontal-view X-ray images of 32,717 unique patients with eight disease image labels text-mined from the associated radiological reports using natural language processing; a unified weakly supervised multi-label classification and disease localization framework is validated using the proposed dataset.
Abstract: The chest X-ray is one of the most commonly accessible radiological examinations for screening and diagnosis of many lung diseases. A tremendous number of X-ray imaging studies accompanied by radiological reports are accumulated and stored in many modern hospitals' Picture Archiving and Communication Systems (PACS). On the other side, it is still an open question how this type of hospital-size knowledge database containing invaluable imaging informatics (i.e., loosely labeled) can be used to facilitate the data-hungry deep learning paradigms in building truly large-scale high precision computer-aided diagnosis (CAD) systems. In this paper, we present a new chest X-ray database, namely "ChestX-ray8", which comprises 108,948 frontal-view X-ray images of 32,717 unique patients with the text-mined eight disease image labels (where each image can have multi-labels), from the associated radiological reports using natural language processing. Importantly, we demonstrate that these commonly occurring thoracic diseases can be detected and even spatially-located via a unified weakly-supervised multi-label image classification and disease localization framework, which is validated using our proposed dataset. Although the initial quantitative results are promising as reported, deep convolutional neural network based "reading chest X-rays" (i.e., recognizing and locating the common disease patterns trained with only image-level labels) remains a strenuous task for fully-automated high precision CAD systems. Data download link: this https URL
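To make the multi-label framing concrete, here is a generic PyTorch sketch of the usual setup for such a task: one sigmoid output per finding trained with binary cross-entropy, so each image can carry several positive labels at once. The tiny backbone and all names are stand-ins, not the paper's architecture, and the localization branch is omitted.

```python
# Generic multi-label classification sketch (one sigmoid per finding,
# binary cross-entropy); a toy CNN stands in for a real backbone.
import torch
import torch.nn as nn

NUM_FINDINGS = 8                       # e.g. the eight text-mined labels

model = nn.Sequential(                 # tiny illustrative backbone
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, NUM_FINDINGS),       # raw logits, one per finding
)
criterion = nn.BCEWithLogitsLoss()     # applies the sigmoid internally

images = torch.randn(4, 1, 224, 224)                         # toy batch of grayscale X-rays
targets = torch.randint(0, 2, (4, NUM_FINDINGS)).float()     # multi-hot labels per image

logits = model(images)
loss = criterion(logits, targets)
loss.backward()

probs = torch.sigmoid(logits)          # independent per-finding probabilities
predicted = (probs > 0.5).int()        # an image may carry several findings
```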

Journal ArticleDOI
TL;DR: In this paper, the authors describe a method to transform a set of stellar evolution tracks onto a uniform basis and then interpolate within that basis to construct stellar isochrones, accommodating a broad range of stellar types, from substellar objects to high-mass stars, and phases of evolution, from the pre-main sequence to the white dwarf cooling sequence.
Abstract: I describe a method to transform a set of stellar evolution tracks onto a uniform basis and then interpolate within that basis to construct stellar isochrones. This method accommodates a broad range of stellar types, from substellar objects to high-mass stars, and phases of evolution, from the pre-main sequence to the white dwarf cooling sequence. I discuss situations in which stellar physics leads to departures from the otherwise monotonic relation between initial stellar mass and lifetime, and how these may be dealt with in isochrone construction. I close with convergence tests and recommendations for the number of points in the uniform basis and the mass between tracks in the original grid required to achieve a certain level of accuracy in the resulting isochrones. The programs that implement these methods are free and open-source; they may be obtained from the project webpage.
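As a toy illustration of the interpolation step (invented array names, not the author's code): once every track has been resampled onto the same uniform basis, a track at an intermediate mass can be built by interpolating each tabulated quantity between the bracketing tracks at a fixed basis point.

```python
# Toy sketch: linear interpolation between two stellar tracks that share a
# uniform basis of N equivalent points; purely illustrative.
import numpy as np

def interp_track(mass, m_lo, track_lo, m_hi, track_hi):
    """Interpolate every column (e.g. log L, log Teff, age) at fixed basis
    point between the two bracketing tracks."""
    w = (mass - m_lo) / (m_hi - m_lo)            # 0 at m_lo, 1 at m_hi
    return (1.0 - w) * track_lo + w * track_hi

n_points = 200                                   # points on the uniform basis
track_10 = np.random.rand(n_points, 3)           # stand-in 1.0 Msun track
track_12 = np.random.rand(n_points, 3)           # stand-in 1.2 Msun track
track_11 = interp_track(1.1, 1.0, track_10, 1.2, track_12)
```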

Journal ArticleDOI
TL;DR: Leptospirosis is among the leading zoonotic causes of morbidity worldwide and accounts for numbers of deaths, which approach or exceed those for other causes of haemorrhagic fever.
Abstract: Background Leptospirosis, a spirochaetal zoonosis, occurs in diverse epidemiological settings and affects vulnerable populations, such as rural subsistence farmers and urban slum dwellers. Although leptospirosis is a life-threatening disease and recognized as an important cause of pulmonary haemorrhage syndrome, the lack of global estimates for morbidity and mortality has contributed to its neglected disease status. Methodology/Principal Findings We conducted a systematic review of published morbidity and mortality studies and databases to extract information on disease incidence and case fatality ratios. Linear regression and Monte Carlo modelling were used to obtain age and gender-adjusted estimates of disease morbidity for countries and Global Burden of Disease (GBD) and WHO regions. We estimated mortality using models that incorporated age and gender-adjusted disease morbidity and case fatality ratios. The review identified 80 studies on disease incidence from 34 countries that met quality criteria. In certain regions, such as Africa, few quality assured studies were identified. The regression model, which incorporated country-specific variables of population structure, life expectancy at birth, distance from the equator, tropical island, and urbanization, accounted for a significant proportion (R2 = 0.60) of the variation in observed disease incidence. We estimate that there were annually 1.03 million cases (95% CI 434,000–1,750,000) and 58,900 deaths (95% CI 23,800–95,900) due to leptospirosis worldwide. A large proportion of cases (48%, 95% CI 40–61%) and deaths (42%, 95% CI 34–53%) were estimated to occur in adult males with age of 20–49 years. Highest estimates of disease morbidity and mortality were observed in GBD regions of South and Southeast Asia, Oceania, Caribbean, Andean, Central, and Tropical Latin America, and East Sub-Saharan Africa. Conclusions/Significance Leptospirosis is among the leading zoonotic causes of morbidity worldwide and accounts for numbers of deaths, which approach or exceed those for other causes of haemorrhagic fever. Highest morbidity and mortality were estimated to occur in resource-poor countries, which include regions where the burden of leptospirosis has been underappreciated


Journal ArticleDOI
TL;DR: This paper reviews available information about the degradation pathways and chemicals that are formed by degradation of the six plastic types that are most widely used in Europe and extrapolate that information to likely pathways and possible degradation products under environmental conditions found on the oceans' surface.
Abstract: Each year vast amounts of plastic are produced worldwide. When released to the environment, plastics accumulate, and plastic debris in the world's oceans is of particular environmental concern. More than 60% of all floating debris in the oceans is plastic and amounts are increasing each year. Plastic polymers in the marine environment are exposed to sunlight, oxidants and physical stress, and over time they weather and degrade. The degradation processes and products must be understood to detect and evaluate potential environmental hazards. Some attention has been drawn to additives and persistent organic pollutants that sorb to the plastic surface, but so far the chemicals generated by degradation of the plastic polymers themselves have not been well studied from an environmental perspective. In this paper we review available information about the degradation pathways and chemicals that are formed by degradation of the six plastic types that are most widely used in Europe. We extrapolate that information to likely pathways and possible degradation products under environmental conditions found on the oceans' surface. The potential degradation pathways and products depend on the polymer type. UV-radiation and oxygen are the most important factors that initiate degradation of polymers with a carbon-carbon backbone, leading to chain scission. Smaller polymer fragments formed by chain scission are more susceptible to biodegradation and therefore abiotic degradation is expected to precede biodegradation. When heteroatoms are present in the main chain of a polymer, degradation proceeds by photo-oxidation, hydrolysis, and biodegradation. Degradation of plastic polymers can lead to low molecular weight polymer fragments, like monomers and oligomers, and formation of new end groups, especially carboxylic acids.

Journal ArticleDOI
Eli A. Stahl, Gerome Breen, Andreas J. Forstner, and 339 more authors (107 institutions)
TL;DR: Genome-wide analysis identifies 30 loci associated with bipolar disorder, allowing for comparisons of shared genes and pathways with other psychiatric disorders, including schizophrenia and depression.
Abstract: Bipolar disorder is a highly heritable psychiatric disorder. We performed a genome-wide association study (GWAS) including 20,352 cases and 31,358 controls of European descent, with follow-up analysis of 822 variants with P < 1 × 10^-4 in an additional 9,412 cases and 137,760 controls. Eight of the 19 variants that were genome-wide significant (P < 5 × 10^-8) in the discovery GWAS were not genome-wide significant in the combined analysis, consistent with small effect sizes and limited power but also with genetic heterogeneity. In the combined analysis, 30 loci were genome-wide significant, including 20 newly identified loci. The significant loci contain genes encoding ion channels, neurotransmitter transporters and synaptic components. Pathway analysis revealed nine significantly enriched gene sets, including regulation of insulin secretion and endocannabinoid signaling. Bipolar I disorder is strongly genetically correlated with schizophrenia, driven by psychosis, whereas bipolar II disorder is more strongly correlated with major depressive disorder. These findings address key clinical questions and provide potential biological mechanisms for bipolar disorder.

Posted Content
TL;DR: The authors extend the hierarchical recurrent encoder-decoder neural network to the dialogue domain, and demonstrate that this model is competitive with state-of-the-art neural language models and back-off n-gram models.
Abstract: We investigate the task of building open domain, conversational dialogue systems based on large dialogue corpora using generative models. Generative models produce system responses that are autonomously generated word-by-word, opening up the possibility for realistic, flexible interactions. In support of this goal, we extend the recently proposed hierarchical recurrent encoder-decoder neural network to the dialogue domain, and demonstrate that this model is competitive with state-of-the-art neural language models and back-off n-gram models. We investigate the limitations of this and similar approaches, and show how its performance can be improved by bootstrapping the learning from a larger question-answer pair corpus and from pretrained word embeddings.

Journal ArticleDOI
27 Jul 2018-Science
TL;DR: Methods for achieving inverse design, which aims to discover tailored materials from the starting point of a particular desired functionality, are reviewed.
Abstract: The discovery of new materials can bring enormous societal and technological progress. In this context, exploring completely the large space of potential materials is computationally intractable. Here, we review methods for achieving inverse design, which aims to discover tailored materials from the starting point of a particular desired functionality. Recent advances from the rapidly growing field of artificial intelligence, mostly from the subfield of machine learning, have resulted in a fertile exchange of ideas, where approaches to inverse molecular design are being proposed and employed at a rapid pace. Among these, deep generative models have been applied to numerous classes of materials: rational design of prospective drugs, synthetic routes to organic compounds, and optimization of photovoltaics and redox flow batteries, as well as a variety of other solid-state materials.

Journal ArticleDOI
TL;DR: None of the drug regimens evaluated reduced the rates of HIV-1 acquisition in an intention-to-treat analysis, and adherence to study drugs was low.
Abstract: Background: Reproductive-age women need effective interventions to prevent the acquisition of human immunodeficiency virus type 1 (HIV-1) infection. Methods: We conducted a randomized, placebo-controlled trial to assess daily treatment with oral tenofovir disoproxil fumarate (TDF), oral tenofovir–emtricitabine (TDF-FTC), or 1% tenofovir (TFV) vaginal gel as preexposure prophylaxis against HIV-1 infection in women in South Africa, Uganda, and Zimbabwe. HIV-1 testing was performed monthly, and plasma TFV levels were assessed quarterly. Results: Of 12,320 women who were screened, 5029 were enrolled in the study. The rate of retention in the study was 91% during 5509 person-years of follow-up. A total of 312 HIV-1 infections occurred; the incidence of HIV-1 infection was 5.7 per 100 person-years. In the modified intention-to-treat analysis, the effectiveness was −49.0% with TDF (hazard ratio for infection, 1.49; 95% confidence interval [CI], 0.97 to 2.29), −4.4% with TDF-FTC (hazard ratio, 1.04; 95% CI, 0.73 to 1.4...