
Journal ArticleDOI
TL;DR: These remdesivir, hydroxychloroquine, lopinavir, and interferon regimens had little or no effect on hospitalized patients with Covid-19, as indicated by overall mortality, initiation of ventilation, and duration of hospital stay.
Abstract: Background World Health Organization expert groups recommended mortality trials of four repurposed antiviral drugs - remdesivir, hydroxychloroquine, lopinavir, and interferon beta-1a - in patients hospitalized with coronavirus disease 2019 (Covid-19). Methods We randomly assigned inpatients with Covid-19 equally between one of the trial drug regimens that was locally available and open control (up to five options, four active and the local standard of care). The intention-to-treat primary analyses examined in-hospital mortality in the four pairwise comparisons of each trial drug and its control (drug available but patient assigned to the same care without that drug). Rate ratios for death were calculated with stratification according to age and status regarding mechanical ventilation at trial entry. Results At 405 hospitals in 30 countries, 11,330 adults underwent randomization; 2750 were assigned to receive remdesivir, 954 to hydroxychloroquine, 1411 to lopinavir (without interferon), 2063 to interferon (including 651 to interferon plus lopinavir), and 4088 to no trial drug. Adherence was 94 to 96% midway through treatment, with 2 to 6% crossover. In total, 1253 deaths were reported (median day of death, day 8; interquartile range, 4 to 14). The Kaplan-Meier 28-day mortality was 11.8% (39.0% if the patient was already receiving ventilation at randomization and 9.5% otherwise). Death occurred in 301 of 2743 patients receiving remdesivir and in 303 of 2708 receiving its control (rate ratio, 0.95; 95% confidence interval [CI], 0.81 to 1.11; P = 0.50), in 104 of 947 patients receiving hydroxychloroquine and in 84 of 906 receiving its control (rate ratio, 1.19; 95% CI, 0.89 to 1.59; P = 0.23), in 148 of 1399 patients receiving lopinavir and in 146 of 1372 receiving its control (rate ratio, 1.00; 95% CI, 0.79 to 1.25; P = 0.97), and in 243 of 2050 patients receiving interferon and in 216 of 2050 receiving its control (rate ratio, 1.16; 95% CI, 0.96 to 1.39; P = 0.11). No drug definitely reduced mortality, overall or in any subgroup, or reduced initiation of ventilation or hospitalization duration. Conclusions These remdesivir, hydroxychloroquine, lopinavir, and interferon regimens had little or no effect on hospitalized patients with Covid-19, as indicated by overall mortality, initiation of ventilation, and duration of hospital stay. (Funded by the World Health Organization; ISRCTN Registry number, ISRCTN83971151; ClinicalTrials.gov number, NCT04315948.).
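As a quick arithmetic check on the remdesivir comparison, here is a minimal Python sketch computing the crude (unstratified) in-hospital death rate ratio from the counts quoted above; the published 0.95 is stratified by age and ventilation status at trial entry, so the crude figure differs slightly.

```python
# Crude death rate ratio for the remdesivir comparison, using the counts
# quoted in the abstract. This is an unstratified check only: the paper's
# 0.95 is stratified by age and ventilation status at trial entry.
deaths_drug, n_drug = 301, 2743
deaths_ctrl, n_ctrl = 303, 2708

crude_rr = (deaths_drug / n_drug) / (deaths_ctrl / n_ctrl)
print(f"crude rate ratio: {crude_rr:.2f}")  # ~0.98 vs the stratified 0.95
```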

2,001 citations


Journal ArticleDOI
Tony Mok, Yi-Long Wu, Iveta Kudaba, Dariusz M. Kowalski, et al.
TL;DR: Overall survival was significantly longer in the pembrolizumab group than in the chemotherapy group in all three TPS populations, and the benefit-to-risk profile suggests that pembrolizumab monotherapy can be extended as first-line therapy to patients with locally advanced or metastatic non-small-cell lung cancer without sensitising EGFR or ALK alterations.

1,999 citations


Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this paper, a stacked attention network (SAN) is proposed to learn to answer natural language questions from images by using semantic representation of a question as query to search for the regions in an image that are related to the answer.
Abstract: This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.
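The attention step the abstract describes lends itself to a short sketch: the question vector queries the image regions, attention weights are formed over regions, and the attended visual vector refines the query for the next stacked layer. A minimal numpy sketch; all names and dimensions are illustrative, not the authors' code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_layer(V, u, Wv, Wu, wp):
    """One stacked-attention step (illustrative shapes).

    V  : (m, d) image region features
    u  : (d,)   current query (question representation)
    Wv : (k, d) projection for image features
    Wu : (k, d) projection for the query
    wp : (k,)   scoring vector
    """
    h = np.tanh(V @ Wv.T + Wu @ u)      # (m, k): combine each region with the query
    p = softmax(h @ wp)                 # (m,): attention over regions
    v_tilde = p @ V                     # (d,): attended visual vector
    return v_tilde + u                  # refined query for the next layer

rng = np.random.default_rng(0)
m, d, k = 49, 64, 32                    # e.g. 7x7 regions, 64-dim features
V, u = rng.normal(size=(m, d)), rng.normal(size=d)
Wv, Wu, wp = rng.normal(size=(k, d)), rng.normal(size=(k, d)), rng.normal(size=k)

u1 = attention_layer(V, u, Wv, Wu, wp)  # stacking: u2 = attention_layer(V, u1, ...)
```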

1,998 citations


Journal ArticleDOI
TL;DR: This work introduces recent updates of RU, focusing on technical issues concerning the submission and updating of Repbase entries, and gives short examples of using RU data.
Abstract: Repbase Update (RU) is a database of representative repeat sequences in eukaryotic genomes. Since its first development as a database of human repetitive sequences in 1992, RU has been serving as a well-curated reference database fundamental for almost all eukaryotic genome sequence analyses. Here, we introduce recent updates of RU, focusing on technical issues concerning the submission and updating of Repbase entries and will give short examples of using RU data. RU sincerely invites a broader submission of repeat sequences from the research community.

1,997 citations


Journal ArticleDOI
06 May 2016-Science
TL;DR: In mouse models of AD, the complement-dependent pathway and microglia that prune excess synapses in development are inappropriately activated and mediate synapse loss, which is an early feature of Alzheimer's disease and correlates with cognitive decline.
Abstract: Synapse loss in Alzheimer’s disease (AD) correlates with cognitive decline. Involvement of microglia and complement in AD has been attributed to neuroinflammation, prominent late in disease. Here we show in mouse models that complement and microglia mediate synaptic loss early in AD. C1q, the initiating protein of the classical complement cascade, is increased and associated with synapses before overt plaque deposition. Inhibition of C1q, C3, or the microglial complement receptor CR3 reduces the number of phagocytic microglia, as well as the extent of early synapse loss. C1q is necessary for the toxic effects of soluble β-amyloid (Aβ) oligomers on synapses and hippocampal long-term potentiation. Finally, microglia in adult brains engulf synaptic material in a CR3-dependent process when exposed to soluble Aβ oligomers. Together, these findings suggest that the complement-dependent pathway and microglia that prune excess synapses in development are inappropriately activated and mediate synapse loss in AD.

1,997 citations


Proceedings ArticleDOI
21 Jul 2017
TL;DR: This work revisits the core DCF formulation and introduces a factorized convolution operator, which drastically reduces the number of parameters in the model, and a compact generative model of the training sample distribution, which significantly reduces memory and time complexity while providing better diversity of samples.
Abstract: In recent years, Discriminative Correlation Filter (DCF) based methods have significantly advanced the state-of-the-art in tracking. However, in the pursuit of ever-increasing tracking performance, their characteristic speed and real-time capability have gradually faded. Further, the increasingly complex models, with a massive number of trainable parameters, have introduced the risk of severe over-fitting. In this work, we tackle the key causes behind the problems of computational complexity and over-fitting, with the aim of simultaneously improving both speed and performance. We revisit the core DCF formulation and introduce: (i) a factorized convolution operator, which drastically reduces the number of parameters in the model, (ii) a compact generative model of the training sample distribution, that significantly reduces memory and time complexity, while providing better diversity of samples, (iii) a conservative model update strategy with improved robustness and reduced complexity. We perform comprehensive experiments on four benchmarks: VOT2016, UAV123, OTB-2015, and TempleColor. When using expensive deep features, our tracker provides a 20-fold speedup and achieves a 13.0% relative gain in Expected Average Overlap compared to the top-ranked method [12] in the VOT2016 challenge. Moreover, our fast variant, using hand-crafted features, operates at 60 Hz on a single CPU, while obtaining 65.0% AUC on OTB-2015.
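The factorized convolution operator can be illustrated with a few lines of linear algebra: if the full filter bank f is constrained to f = Pc, then filtering D-channel features is equivalent to first projecting them to C channels and applying the small filter bank, which is where the parameter savings come from. A hedged numpy sketch of that identity, using inner products as a stand-in for the paper's Fourier-domain correlations; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D, C, S = 512, 64, 2500        # feature channels, reduced dim, spatial support size

x = rng.normal(size=(D, S))    # multi-channel features of one image patch
P = rng.normal(size=(D, C))    # learned projection matrix
c = rng.normal(size=(C, S))    # compact filter coefficients

# The full filter bank need not be materialized at test time: with f = P @ c,
#   sum_d <f_d, x_d> == sum_c <c_c, (P.T @ x)_c>
# so features can be projected down to C channels before filtering.
f = P @ c
score_full = np.sum(f * x)
score_fact = np.sum(c * (P.T @ x))
assert np.allclose(score_full, score_fact)

print("filter params, full      :", D * S)          # 1,280,000
print("filter params, factorized:", C * S + D * C)  #   192,768
```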

1,993 citations


Journal ArticleDOI
TL;DR: Given the growing trend in the application of ML methods in cancer research, this work reviews the most recent publications that employ these techniques to model cancer risk or patient outcomes.
Abstract: Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type have become a necessity in cancer research, as they can facilitate the subsequent clinical management of patients. The importance of classifying cancer patients into high- or low-risk groups has led many research teams, from the biomedical and the bioinformatics fields, to study the application of machine learning (ML) methods. Accordingly, these techniques have been utilized with the aim of modeling the progression and treatment of cancerous conditions. In addition, the ability of ML tools to detect key features in complex datasets underscores their importance. A variety of these techniques, including Artificial Neural Networks (ANNs), Bayesian Networks (BNs), Support Vector Machines (SVMs) and Decision Trees (DTs), have been widely applied in cancer research for the development of predictive models, resulting in effective and accurate decision making. Even though it is evident that the use of ML methods can improve our understanding of cancer progression, an appropriate level of validation is needed in order for these methods to be considered in everyday clinical practice. In this work, we present a review of recent ML approaches employed in the modeling of cancer progression. The predictive models discussed here are based on various supervised ML techniques as well as on different input features and data samples. Given the growing trend in the application of ML methods in cancer research, we present here the most recent publications that employ these techniques to model cancer risk or patient outcomes.
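As a concrete toy instance of the supervised pipelines this review surveys, here is a scikit-learn sketch that trains an SVM to separate high- and low-risk cases under cross-validation. The data are synthetic stand-ins for clinical/genomic features, and, as the review itself stresses, real clinical use would require far more rigorous (including external) validation.

```python
# Minimal supervised ML pipeline of the kind surveyed in the review:
# an SVM classifying patients into high/low risk groups on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           random_state=0)           # stand-in for patient features
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, cv=5)          # 5-fold cross-validation
print(f"mean CV accuracy: {scores.mean():.3f}")
```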

1,991 citations


Book ChapterDOI
Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, Yun Fu
08 Sep 2018
TL;DR: Very deep residual channel attention networks (RCAN), as presented in this paper, use a residual in residual (RIR) structure to form a very deep network consisting of several residual groups with long skip connections; each residual group contains several residual blocks with short skip connections.
Abstract: Convolutional neural network (CNN) depth is of crucial importance for image super-resolution (SR). However, we observe that deeper networks for image SR are more difficult to train. The low-resolution inputs and features contain abundant low-frequency information, which is treated equally across channels, hence hindering the representational ability of CNNs. To solve these problems, we propose the very deep residual channel attention networks (RCAN). Specifically, we propose a residual in residual (RIR) structure to form a very deep network, which consists of several residual groups with long skip connections. Each residual group contains some residual blocks with short skip connections. Meanwhile, RIR allows abundant low-frequency information to be bypassed through multiple skip connections, making the main network focus on learning high-frequency information. Furthermore, we propose a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels. Extensive experiments show that our RCAN achieves better accuracy and visual improvements against state-of-the-art methods.
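A hedged PyTorch sketch of the two ingredients named in the abstract, a channel attention gate and a residual channel attention block wrapped in a short skip connection; layer sizes and the reduction ratio are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: global average pooling -> channel-downscaling conv
    -> ReLU -> channel-upscaling conv -> sigmoid gate that rescales channels."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # (B, C, 1, 1) channel statistics
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)                            # adaptively rescale channels

class RCAB(nn.Module):
    """Residual channel attention block: conv-ReLU-conv + channel attention,
    with a short skip connection around the whole body."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            ChannelAttention(channels),
        )

    def forward(self, x):
        return x + self.body(x)                            # short skip connection

x = torch.randn(1, 64, 48, 48)
print(RCAB(64)(x).shape)                                   # torch.Size([1, 64, 48, 48])
```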

1,991 citations


Proceedings ArticleDOI
14 Jun 2020
TL;DR: Cross Stage Partial Network (CSPNet) as discussed by the authors integrates feature maps from the beginning and the end of a network stage to mitigate the problem of duplicate gradient information within network optimization.
Abstract: Neural networks have enabled state-of-the-art approaches to achieve incredible results on computer vision tasks such as object detection. However, such success greatly relies on costly computation resources, which hinders people with cheap devices from appreciating the advanced technology. In this paper, we propose Cross Stage Partial Network (CSPNet) to mitigate the problem that previous works require heavy inference computations from the network architecture perspective. We attribute the problem to the duplicate gradient information within network optimization. The proposed networks respect the variability of the gradients by integrating feature maps from the beginning and the end of a network stage, which, in our experiments, reduces computations by 20% with equivalent or even superior accuracy on the ImageNet dataset, and significantly outperforms state-of-the-art approaches in terms of AP50 on the MS COCO object detection dataset. The CSPNet is easy to implement and general enough to cope with architectures based on ResNet, ResNeXt, and DenseNet.
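A hedged PyTorch sketch of the cross-stage-partial idea: split the stage's feature map along channels, push one part through the stage's convolutions, keep the other as a shortcut, and merge, so gradient information reaches the merge point by two different routes. The specific layers below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CSPBlock(nn.Module):
    """Cross-stage-partial sketch: half the channels pass through the stage,
    half bypass it, and a 1x1 transition merges the two paths."""
    def __init__(self, channels: int, n_convs: int = 2):
        super().__init__()
        half = channels // 2
        blocks = []
        for _ in range(n_convs):
            blocks += [nn.Conv2d(half, half, 3, padding=1),
                       nn.BatchNorm2d(half),
                       nn.ReLU(inplace=True)]
        self.stage = nn.Sequential(*blocks)
        self.merge = nn.Conv2d(channels, channels, 1)      # transition after concat

    def forward(self, x):
        a, b = x.chunk(2, dim=1)                           # partial split along channels
        return self.merge(torch.cat([a, self.stage(b)], dim=1))

print(CSPBlock(64)(torch.randn(1, 64, 32, 32)).shape)      # torch.Size([1, 64, 32, 32])
```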

1,991 citations


Journal ArticleDOI
TL;DR: Virtual adversarial training (VAT) as discussed by the authors is a regularization method based on virtual adversarial loss, which is a measure of local smoothness of the conditional label distribution given input.
Abstract: We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only “virtually” adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.
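A hedged PyTorch sketch of the VAT loss as the abstract describes it: one power-iteration step (one extra forward-backward pair) finds the virtually adversarial direction without using labels, and the loss is the KL divergence of the predictions at that perturbation. The hyperparameters xi and eps are illustrative defaults, not values from the paper.

```python
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=2.0, n_power=1):
    """Virtual adversarial loss: local distributional smoothness of p(y|x),
    sketched from the paper's description (not the authors' code)."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)                 # current predictions, no labels needed

    d = torch.randn_like(x)                            # random initial direction
    for _ in range(n_power):                           # power iteration for the worst direction
        d = xi * F.normalize(d.flatten(1), dim=1).view_as(x)
        d.requires_grad_(True)
        p_hat = F.log_softmax(model(x + d), dim=1)
        adv_dist = F.kl_div(p_hat, p, reduction="batchmean")
        d = torch.autograd.grad(adv_dist, d)[0].detach()

    r_adv = eps * F.normalize(d.flatten(1), dim=1).view_as(x)
    p_hat = F.log_softmax(model(x + r_adv), dim=1)     # "virtually" adversarial point
    return F.kl_div(p_hat, p, reduction="batchmean")   # penalize non-smoothness
```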

1,991 citations


Journal ArticleDOI
27 Apr 2018-Science
TL;DR: It is shown that RNA-guided DNA binding unleashes indiscriminate single-stranded DNA cleavage activity by Cas12a that completely degrades ssDNA molecules, which is also a property of other type V CRISPR-Cas12 enzymes.
Abstract: CRISPR-Cas12a (Cpf1) proteins are RNA-guided enzymes that bind and cut DNA as components of bacterial adaptive immune systems. Like CRISPR-Cas9, Cas12a has been harnessed for genome editing on the basis of its ability to generate targeted, double-stranded DNA breaks. Here we show that RNA-guided DNA binding unleashes indiscriminate single-stranded DNA (ssDNA) cleavage activity by Cas12a that completely degrades ssDNA molecules. We find that target-activated, nonspecific single-stranded deoxyribonuclease (ssDNase) cleavage is also a property of other type V CRISPR-Cas12 enzymes. By combining Cas12a ssDNase activation with isothermal amplification, we create a method termed DNA endonuclease-targeted CRISPR trans reporter (DETECTR), which achieves attomolar sensitivity for DNA detection. DETECTR enables rapid and specific detection of human papillomavirus in patient samples, thereby providing a simple platform for molecular diagnostics.

Journal ArticleDOI
TL;DR: A historical archive covering the past 15 years of GO data, with a consistent format and file structure for both the ontology and annotations, is made available to support traceability and reproducibility.
Abstract: The Gene Ontology Consortium (GOC) provides the most comprehensive resource currently available for computable knowledge regarding the functions of genes and gene products. Here, we report the advances of the consortium over the past two years. The new GO-CAM annotation framework was notably improved, and we formalized the model with a computational schema to check and validate the rapidly increasing repository of 2,838 GO-CAMs. In addition, we describe the impacts of several collaborations to refine GO and report a 10% increase in the number of GO annotations, a 25% increase in annotated gene products, and over 9,400 new scientific articles annotated. As the project matures, we continue our efforts to review older annotations in light of newer findings and to maintain consistency with other ontologies. As a result, 20,000 annotations derived from experimental data were reviewed, corresponding to 2.5% of experimental GO annotations. The website (http://geneontology.org) was redesigned for quick access to documentation, downloads and tools. To maintain an accurate resource and support traceability and reproducibility, we have made available a historical archive covering the past 15 years of GO data with a consistent format and file structure for both the ontology and annotations.

Journal ArticleDOI
TL;DR: An approach combining the analysis of signature protein families and features of the architecture of cas loci that unambiguously partitions most CRISPR–cas loci into distinct classes, types and subtypes is presented.
Abstract: The evolution of CRISPR-cas loci, which encode adaptive immune systems in archaea and bacteria, involves rapid changes, in particular numerous rearrangements of the locus architecture and horizontal transfer of complete loci or individual modules. These dynamics complicate straightforward phylogenetic classification, but here we present an approach combining the analysis of signature protein families and features of the architecture of cas loci that unambiguously partitions most CRISPR-cas loci into distinct classes, types and subtypes. The new classification retains the overall structure of the previous version but is expanded to now encompass two classes, five types and 16 subtypes. The relative stability of the classification suggests that the most prevalent variants of CRISPR-Cas systems are already known. However, the existence of rare, currently unclassifiable variants implies that additional types and subtypes remain to be characterized.

Journal ArticleDOI
TL;DR: In this paper, the effect of nanostructures on the properties of supercapacitors, including specific capacitance, rate capability and cycle stability, is discussed, which may serve as a guideline for the next generation of supercapacitor electrode design.
Abstract: Supercapacitors have drawn considerable attention in recent years due to their high specific power, long cycle life, and ability to bridge the power/energy gap between conventional capacitors and batteries/fuel cells. Nanostructured electrode materials have demonstrated superior electrochemical properties in producing high-performance supercapacitors. In this review article, we describe the recent progress and advances in designing nanostructured supercapacitor electrode materials based on various dimensions ranging from zero to three. We highlight the effect of nanostructures on the properties of supercapacitors including specific capacitance, rate capability and cycle stability, which may serve as a guideline for the next generation of supercapacitor electrode design.

Journal ArticleDOI
TL;DR: Molecular genetic studies have identified transduction and transcription factors that act in the neurocircuitry associated with the development and maintenance of addiction and that might mediate initial vulnerability, maintenance, and relapse.

Journal ArticleDOI
TL;DR: The current understanding of how a dysregulated immune response may cause lung immunopathology leading to deleterious clinical manifestations after pathogenic hCoV infections is reviewed.
Abstract: Human coronaviruses (hCoVs) can be divided into low pathogenic and highly pathogenic coronaviruses. The low pathogenic CoVs infect the upper respiratory tract and cause mild, cold-like respiratory illness. In contrast, highly pathogenic hCoVs such as severe acute respiratory syndrome CoV (SARS-CoV) and Middle East respiratory syndrome CoV (MERS-CoV) predominantly infect lower airways and cause fatal pneumonia. Severe pneumonia caused by pathogenic hCoVs is often associated with rapid virus replication, massive inflammatory cell infiltration and elevated pro-inflammatory cytokine/chemokine responses resulting in acute lung injury (ALI) and acute respiratory distress syndrome (ARDS). Recent studies in experimentally infected animals strongly suggest a crucial role for virus-induced immunopathological events in causing fatal pneumonia after hCoV infections. Here we review the current understanding of how a dysregulated immune response may cause lung immunopathology leading to deleterious clinical manifestations after pathogenic hCoV infections.

Journal ArticleDOI
01 Jan 2016-Science
TL;DR: In this paper, the authors used structure-guided protein engineering to improve the specificity of Streptococcus pyogenes Cas9 (SpCas9) using targeted deep sequencing and unbiased whole-genome off-target analysis to assess Cas9-mediated DNA cleavage in human cells.
Abstract: The RNA-guided endonuclease Cas9 is a versatile genome-editing tool with a broad range of applications from therapeutics to functional annotation of genes. Cas9 creates double-strand breaks (DSBs) at targeted genomic loci complementary to a short RNA guide. However, Cas9 can cleave off-target sites that are not fully complementary to the guide, which poses a major challenge for genome editing. Here, we use structure-guided protein engineering to improve the specificity of Streptococcus pyogenes Cas9 (SpCas9). Using targeted deep sequencing and unbiased whole-genome off-target analysis to assess Cas9-mediated DNA cleavage in human cells, we demonstrate that "enhanced specificity" SpCas9 (eSpCas9) variants reduce off-target effects and maintain robust on-target cleavage. Thus, eSpCas9 could be broadly useful for genome-editing applications requiring a high level of specificity.

Journal ArticleDOI
TL;DR: Progress in reducing ovarian cancer incidence and mortality can be accelerated by reducing racial disparities and furthering knowledge of etiology and tumorigenesis to facilitate strategies for prevention and early detection.
Abstract: In 2018, there will be approximately 22,240 new cases of ovarian cancer diagnosed and 14,070 ovarian cancer deaths in the United States. Herein, the American Cancer Society provides an overview of ovarian cancer occurrence based on incidence data from nationwide population-based cancer registries and mortality data from the National Center for Health Statistics. The status of early detection strategies is also reviewed. In the United States, the overall ovarian cancer incidence rate declined from 1985 (16.6 per 100,000) to 2014 (11.8 per 100,000) by 29% and the mortality rate declined between 1976 (10.0 per 100,000) and 2015 (6.7 per 100,000) by 33%. Ovarian cancer encompasses a heterogeneous group of malignancies that vary in etiology, molecular biology, and numerous other characteristics. Ninety percent of ovarian cancers are epithelial, the most common being serous carcinoma, for which incidence is highest in non-Hispanic whites (NHWs) (5.2 per 100,000) and lowest in non-Hispanic blacks (NHBs) and Asians/Pacific Islanders (APIs) (3.4 per 100,000). Notably, however, APIs have the highest incidence of endometrioid and clear cell carcinomas, which occur at younger ages and help explain comparable epithelial cancer incidence for APIs and NHWs younger than 55 years. Most serous carcinomas are diagnosed at stage III (51%) or IV (29%), for which the 5-year cause-specific survival for patients diagnosed during 2007 through 2013 was 42% and 26%, respectively. For all stages of epithelial cancer combined, 5-year survival is highest in APIs (57%) and lowest in NHBs (35%), who have the lowest survival for almost every stage of diagnosis across cancer subtypes. Moreover, survival has plateaued in NHBs for decades despite increasing in NHWs, from 40% for cases diagnosed during 1992 through 1994 to 47% during 2007 through 2013. Progress in reducing ovarian cancer incidence and mortality can be accelerated by reducing racial disparities and furthering knowledge of etiology and tumorigenesis to facilitate strategies for prevention and early detection. CA Cancer J Clin 2018;68:284-296. © 2018 American Cancer Society.
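A two-line Python check of the percentage declines quoted above:

```python
# Verify the percent declines reported in the abstract (rates per 100,000).
incidence_1985, incidence_2014 = 16.6, 11.8
mortality_1976, mortality_2015 = 10.0, 6.7

print(f"incidence decline: {1 - incidence_2014 / incidence_1985:.0%}")  # 29%
print(f"mortality decline: {1 - mortality_2015 / mortality_1976:.0%}")  # 33%
```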

Journal ArticleDOI
TL;DR: A deep cascaded multitask framework that exploits the inherent correlation between detection and alignment to boost their performance; it achieves superior accuracy over state-of-the-art techniques on challenging face detection and alignment benchmarks.
Abstract: Face detection and alignment in unconstrained environments are challenging due to various poses, illuminations and occlusions. Recent studies show that deep learning approaches can achieve impressive performance on these two tasks. In this paper, we propose a deep cascaded multi-task framework which exploits the inherent correlation between them to boost their performance. In particular, our framework adopts a cascaded structure with three stages of carefully designed deep convolutional networks that predict face and landmark locations in a coarse-to-fine manner. In addition, in the learning process, we propose a new online hard sample mining strategy that can improve the performance automatically without manual sample selection. Our method achieves superior accuracy over the state-of-the-art techniques on the challenging FDDB and WIDER FACE benchmarks for face detection, and the AFLW benchmark for face alignment, while keeping real-time performance.
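The online hard sample mining strategy lends itself to a short sketch: rank per-sample losses within each mini-batch and backpropagate only through the hardest fraction. The 70% keep ratio below is an assumption for illustration, not a number taken from the abstract.

```python
import torch

def ohem_loss(per_sample_losses: torch.Tensor, keep_ratio: float = 0.7):
    """Online hard sample mining (sketch): keep only the largest losses in
    the mini-batch, i.e. the hardest samples, and average over those alone."""
    k = max(1, int(keep_ratio * per_sample_losses.numel()))
    hard, _ = per_sample_losses.topk(k)       # largest-k losses = hardest samples
    return hard.mean()

losses = torch.rand(128)                       # e.g. per-face classification losses
print(ohem_loss(losses))
```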

Proceedings Article
25 Jan 2015
TL;DR: A recurrent convolutional neural network is introduced for text classification without human-designed features to capture contextual information as far as possible when learning word representations, which may introduce considerably less noise compared to traditional window-based neural networks.
Abstract: Text classification is a foundational task in many NLP applications. Traditional text classifiers often rely on many human-designed features, such as dictionaries, knowledge bases and special tree kernels. In contrast to traditional methods, we introduce a recurrent convolutional neural network for text classification without human-designed features. In our model, we apply a recurrent structure to capture contextual information as far as possible when learning word representations, which may introduce considerably less noise compared to traditional window-based neural networks. We also employ a max-pooling layer that automatically judges which words play key roles in text classification to capture the key components in texts. We conduct experiments on four commonly used datasets. The experimental results show that the proposed method outperforms the state-of-the-art methods on several datasets, particularly on document-level datasets.
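A hedged PyTorch sketch of the recurrent-convolutional idea: contextualize each word with a bidirectional recurrence, concatenate context and embedding, apply a tanh projection, then max-pool over time so the classifier keys on the most informative words. The use of an LSTM and all sizes are assumptions, not the paper's exact recurrences.

```python
import torch
import torch.nn as nn

class RCNNClassifier(nn.Module):
    """Sketch: [left context; word embedding; right context] per word via a
    bidirectional RNN, tanh projection, max-pooling over time, then classify."""
    def __init__(self, vocab: int, emb: int = 100, hid: int = 100, n_cls: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.rnn = nn.LSTM(emb, hid, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(emb + 2 * hid, hid)
        self.out = nn.Linear(hid, n_cls)

    def forward(self, tokens):                       # tokens: (B, T) word ids
        e = self.embed(tokens)                       # (B, T, emb)
        ctx, _ = self.rnn(e)                         # (B, T, 2*hid) contexts
        y = torch.tanh(self.proj(torch.cat([ctx, e], dim=-1)))  # latent per word
        y, _ = y.max(dim=1)                          # max-pool over time: key words
        return self.out(y)

logits = RCNNClassifier(vocab=10_000)(torch.randint(0, 10_000, (8, 50)))
print(logits.shape)                                  # torch.Size([8, 4])
```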

Journal ArticleDOI
B. P. Abbott, Richard J. Abbott, T. D. Abbott, Fausto Acernese, et al.
TL;DR: For the first time, the nature of gravitational-wave polarizations from the antenna response of the LIGO-Virgo network is tested, thus enabling a new class of phenomenological tests of gravity.
Abstract: On August 14, 2017 at 10:30:43 UTC, the Advanced Virgo detector and the two Advanced LIGO detectors coherently observed a transient gravitational-wave signal produced by the coalescence of two stellar-mass black holes, with a false-alarm rate of ≲1 in 27,000 years. The signal was observed with a three-detector network matched-filter signal-to-noise ratio of 18. The inferred masses of the initial black holes are 30.5 (+5.7/−3.0) M⊙ and 25.3 (+2.8/−4.2) M⊙ (at the 90% credible level). The luminosity distance of the source is 540 (+130/−210) Mpc, corresponding to a redshift of z = 0.11 (+0.03/−0.04). A network of three detectors improves the sky localization of the source, reducing the area of the 90% credible region from 1160 deg² using only the two LIGO detectors to 60 deg² using all three detectors. For the first time, we can test the nature of gravitational-wave polarizations from the antenna response of the LIGO-Virgo network, thus enabling a new class of phenomenological tests of gravity.

Journal ArticleDOI
TL;DR: Treatment with lutetium‐177 (177Lu)–Dotatate resulted in markedly longer progression‐free survival and a significantly higher response rate than high‐dose octreotide LAR among patients with advanced midgut neuroendocrine tumors.
Abstract: Background: Patients with advanced midgut neuroendocrine tumors who have had disease progression during first-line somatostatin analogue therapy have limited therapeutic options. This randomized, controlled trial evaluated the efficacy and safety of lutetium-177 (177Lu)–Dotatate in patients with advanced, progressive, somatostatin-receptor–positive midgut neuroendocrine tumors. Methods: We randomly assigned 229 patients who had well-differentiated, metastatic midgut neuroendocrine tumors to receive either 177Lu-Dotatate (116 patients) at a dose of 7.4 GBq every 8 weeks (four intravenous infusions, plus best supportive care including octreotide long-acting repeatable [LAR] administered intramuscularly at a dose of 30 mg) (177Lu-Dotatate group) or octreotide LAR alone (113 patients) administered intramuscularly at a dose of 60 mg every 4 weeks (control group). The primary end point was progression-free survival. Secondary end points included the objective response rate, overall survival, safety, and the side-effect profile.

Journal ArticleDOI
16 Jan 2015
TL;DR: The "Great Acceleration" graphs as mentioned in this paper, originally published in 2004 to show socio-economic and Earth System trends from 1750 to 2000, have now been updated to 2010 and the dominant feature of the socioeconomic trends is that the economic activity of the human enterprise continues to grow at a rapid rate.
Abstract: The ‘Great Acceleration’ graphs, originally published in 2004 to show socio-economic and Earth System trends from 1750 to 2000, have now been updated to 2010. In the graphs of socio-economic trends, where the data permit, the activity of the wealthy (OECD) countries, those countries with emerging economies, and the rest of the world have now been differentiated. The dominant feature of the socio-economic trends is that the economic activity of the human enterprise continues to grow at a rapid rate. However, the differentiated graphs clearly show that strong equity issues are masked by considering global aggregates only. Most of the population growth since 1950 has been in the non-OECD world but the world’s economy (GDP), and hence consumption, is still strongly dominated by the OECD world. The Earth System indicators, in general, continued their long-term, post-industrial rise, although a few, such as atmospheric methane concentration and stratospheric ozone loss, showed a slowing or apparent stabilisation over the past decade. The post-1950 acceleration in the Earth System indicators remains clear. Only beyond the mid-20th century is there clear evidence for fundamental shifts in the state and functioning of the Earth System that are beyond the range of variability of the Holocene and driven by human activities. Thus, of all the candidates for a start date for the Anthropocene, the beginning of the Great Acceleration is by far the most convincing from an Earth System science perspective.

Journal ArticleDOI
07 May 2015
TL;DR: This paper discusses aspects of recruiting subjects for economic laboratory experiments, and shows how the Online Recruitment System for Economic Experiments can help.
Abstract: This paper discusses aspects of recruiting subjects for economic laboratory experiments, and shows how the Online Recruitment System for Economic Experiments can help. The software package provides experimenters with a free, convenient, and very powerful tool to organize their experiments and sessions.

Proceedings ArticleDOI
07 Dec 2015
TL;DR: In this article, a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling is introduced.
Abstract: Pixel-level labelling tasks, such as semantic segmentation, play a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning techniques for image recognition to tackle pixel-level labelling tasks. One central issue in this methodology is the limited capacity of deep learning techniques to delineate visual objects. To solve this problem, we introduce a new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling. To this end, we formulate Conditional Random Fields with Gaussian pairwise potentials and mean-field approximate inference as Recurrent Neural Networks. This network, called CRF-RNN, is then plugged in as a part of a CNN to obtain a deep network that has desirable properties of both CNNs and CRFs. Importantly, our system fully integrates CRF modelling with CNNs, making it possible to train the whole deep network end-to-end with the usual back-propagation algorithm, avoiding offline post-processing methods for object delineation. We apply the proposed method to the problem of semantic image segmentation, obtaining top results on the challenging Pascal VOC 2012 segmentation benchmark.
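The mean-field inference that CRF-RNN unrolls can be sketched in a few numpy lines: filter the current label marginals (message passing), mix them with a label compatibility transform, add the unaries, and renormalize. Below, a single Gaussian spatial kernel stands in for the paper's mixture of spatial and bilateral kernels; all sizes are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def meanfield_step(unary, Q, compat, sigma=3.0, w=1.0):
    """One mean-field update of a CRF with Gaussian pairwise potentials,
    the step CRF-RNN unrolls as an RNN stage (sketch, not the paper's code).

    unary : (L, H, W) negative log unary potentials
    Q     : (L, H, W) current label marginals
    compat: (L, L)    label compatibility matrix (e.g. Potts)
    """
    # Message passing: spatially filter each label's marginals.
    msg = np.stack([gaussian_filter(Q[l], sigma) for l in range(Q.shape[0])])
    # Compatibility transform: mix messages across labels.
    pairwise = np.tensordot(compat, msg, axes=1)        # (L, H, W)
    # Local update + normalization (softmax over labels).
    logits = -unary - w * pairwise
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

L, H, W = 3, 32, 32
rng = np.random.default_rng(0)
unary = rng.random((L, H, W))
Q = np.full((L, H, W), 1.0 / L)
compat = 1.0 - np.eye(L)                                # Potts model
for _ in range(5):                                      # unrolled iterations
    Q = meanfield_step(unary, Q, compat)
```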

Journal ArticleDOI
07 Jan 2015-BMJ
TL;DR: The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used, and is best used in conjunction with the TRIPod explanation and elaboration document.
Abstract: Prediction models are developed to aid health-care providers in estimating the probability or risk that a specific disease or condition is present (diagnostic models) or that a specific event will occur in the future (prognostic models), to inform their decision making. However, the overwhelming evidence shows that the quality of reporting of prediction model studies is poor. Only with full and clear reporting of information on all aspects of a prediction model can risk of bias and potential usefulness of prediction models be adequately assessed. The Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Initiative developed a set of recommendations for the reporting of studies developing, validating, or updating a prediction model, whether for diagnostic or prognostic purposes. This article describes how the TRIPOD Statement was developed. An extensive list of items based on a review of the literature was created, which was reduced after a Web-based survey and revised during a 3-day meeting in June 2011 with methodologists, health-care professionals, and journal editors. The list was refined during several meetings of the steering group and in e-mail discussions with the wider group of TRIPOD contributors. The resulting TRIPOD Statement is a checklist of 22 items, deemed essential for transparent reporting of a prediction model study. The TRIPOD Statement aims to improve the transparency of the reporting of a prediction model study regardless of the study methods used. The TRIPOD Statement is best used in conjunction with the TRIPOD explanation and elaboration document. To aid the editorial process and readers of prediction model studies, it is recommended that authors include a completed checklist in their submission (also available at www.tripod-statement.org).

18 Feb 2016
TL;DR: It is shown that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech, two vastly different languages, and is competitive with the transcription of human workers when benchmarked on standard datasets.
Abstract: We show that an end-to-end deep learning approach can be used to recognize either English or Mandarin Chinese speech, two vastly different languages. Because it replaces entire pipelines of hand-engineered components with neural networks, end-to-end learning allows us to handle a diverse variety of speech including noisy environments, accents and different languages. Key to our approach is our application of HPC techniques, enabling experiments that previously took weeks to now run in days. This allows us to iterate more quickly to identify superior architectures and algorithms. As a result, in several cases, our system is competitive with the transcription of human workers when benchmarked on standard datasets. Finally, using a technique called Batch Dispatch with GPUs in the data center, we show that our system can be inexpensively deployed in an online setting, delivering low latency when serving users at scale.
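End-to-end speech models of this kind are typically trained with connectionist temporal classification (CTC), which aligns frame-level outputs to transcripts without frame labels. A minimal, purely illustrative PyTorch training step; the tiny network, shapes, and alphabet size below are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

T, B, F_, n_cls = 200, 4, 80, 29               # frames, batch, filterbanks, chars + blank
model = nn.Sequential(nn.Linear(F_, 256), nn.ReLU(), nn.Linear(256, n_cls))
ctc = nn.CTCLoss(blank=0)

x = torch.randn(T, B, F_)                       # spectrogram frames
log_probs = model(x).log_softmax(dim=-1)        # (T, B, n_cls), as CTCLoss expects
targets = torch.randint(1, n_cls, (B, 30))      # dummy transcripts (labels 1..28)
in_lens = torch.full((B,), T, dtype=torch.long)
tgt_lens = torch.full((B,), 30, dtype=torch.long)

loss = ctc(log_probs, targets, in_lens, tgt_lens)
loss.backward()                                 # one end-to-end training step
print(float(loss))
```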

Proceedings ArticleDOI
25 Apr 2017
TL;DR: In this paper, an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences is presented, which uses single-view depth and multiview pose networks with a loss based on warping nearby views to the target using the computed depth and pose.
Abstract: We present an unsupervised learning framework for the task of monocular depth and camera motion estimation from unstructured video sequences. In common with recent work [10, 14, 16], we use an end-to-end learning approach with view synthesis as the supervisory signal. In contrast to the previous work, our method is completely unsupervised, requiring only monocular video sequences for training. Our method uses single-view depth and multiview pose networks, with a loss based on warping nearby views to the target using the computed depth and pose. The networks are thus coupled by the loss during training, but can be applied independently at test time. Empirical evaluation on the KITTI dataset demonstrates the effectiveness of our approach: 1) monocular depth performs comparably with supervised methods that use either ground-truth pose or depth for training, and 2) pose estimation performs favorably compared to established SLAM systems under comparable input settings.
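A hedged PyTorch sketch of the view-synthesis supervision the abstract describes: back-project target pixels with the predicted depth, move them with the predicted relative pose, project into the source view, bilinearly sample it, and penalize the photometric difference. The conventions here (intrinsics K, a [R|t] pose matrix, an L1 photometric loss) are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def view_synthesis_loss(tgt, src, depth, pose, K):
    """Warp the source view into the target view via predicted depth + pose.

    tgt, src : (B, 3, H, W) images
    depth    : (B, 1, H, W) predicted target-view depth
    pose     : (B, 3, 4)    predicted [R|t], target camera -> source camera
    K        : (B, 3, 3)    camera intrinsics
    """
    B, _, H, W = tgt.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).float()   # (3, H, W)
    pix = pix.view(1, 3, -1).expand(B, -1, -1)                    # (B, 3, HW)

    cam = torch.inverse(K) @ pix * depth.view(B, 1, -1)           # back-project
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W)], dim=1)      # homogeneous coords
    proj = K @ (pose @ cam_h)                                     # into source view
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)                # (B, 2, HW)

    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    u = 2 * uv[:, 0] / (W - 1) - 1
    v = 2 * uv[:, 1] / (H - 1) - 1
    grid = torch.stack([u, v], dim=-1).view(B, H, W, 2)

    warped = F.grid_sample(src, grid, align_corners=True)         # bilinear sampling
    return (warped - tgt).abs().mean()                            # photometric L1
```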

01 Jan 2015
TL;DR: In this paper, the authors present a new definition of fractional derivative with a smooth kernel that takes on two different representations for the temporal and spatial variables; the spatial representation is a non-local fractional derivative for which it is more convenient to work with the Fourier transform.
Abstract: In the paper, we present a new definition of fractional derivative with a smooth kernel which takes on two different representations for the temporal and spatial variable. The first works on the time variables; thus it is suitable to use the Laplace transform. The second definition is related to the spatial variables, by a non-local fractional derivative, for which it is more convenient to work with the Fourier transform. The interest for this new approach with a regular kernel was born from the prospect that there is a class of non-local systems, which have the ability to describe the material heterogeneities and the fluctuations of different scales, which cannot be well described by classical local theories or by fractional models with singular kernel.
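For reference, the regular-kernel (time-variable) definition this abstract refers to is usually written, for α ∈ (0, 1) and a differentiable f, as below; the exponential kernel replaces the singular power-law kernel of the classical Caputo derivative, and M(α) is a normalization with M(0) = M(1) = 1. This is the standard published form, quoted here from memory rather than from this abstract.

```latex
\mathcal{D}_t^{\alpha} f(t)
  = \frac{M(\alpha)}{1-\alpha}
    \int_a^t f'(s)\,
    \exp\!\left[-\frac{\alpha\,(t-s)}{1-\alpha}\right] ds
```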

Journal ArticleDOI
Peter H. Sudmant, Tobias Rausch, Eugene J. Gardner, Robert E. Handsaker, et al.
01 Oct 2015-Nature
TL;DR: In this paper, the authors describe an integrated set of eight structural variant classes comprising both balanced and unbalanced variants, which are constructed using short-read DNA sequencing data and statistically phased onto haplotype blocks in 26 human populations.
Abstract: Structural variants are implicated in numerous diseases and make up the majority of varying nucleotides among human genomes. Here we describe an integrated set of eight structural variant classes comprising both balanced and unbalanced variants, which we constructed using short-read DNA sequencing data and statistically phased onto haplotype blocks in 26 human populations. Analysing this set, we identify numerous gene-intersecting structural variants exhibiting population stratification and describe naturally occurring homozygous gene knockouts that suggest the dispensability of a variety of human genes. We demonstrate that structural variants are enriched on haplotypes identified by genome-wide association studies and exhibit enrichment for expression quantitative trait loci. Additionally, we uncover appreciable levels of structural variant complexity at different scales, including genic loci subject to clusters of repeated rearrangement and complex structural variants with multiple breakpoints likely to have formed through individual mutational events. Our catalogue will enhance future studies into structural variant demography, functional impact and disease association.