Journal ArticleDOI
TL;DR: In this paper, the main anthropogenic sources of polycyclic aromatic hydrocarbons (PAHs) and their effect on the concentrations of these compounds in air are discussed.

2,217 citations


Proceedings Article
06 Jul 2015
TL;DR: In this paper, an encoder LSTM is used to map an input video sequence into a fixed length representation, which is then decoded using single or multiple decoder Long Short Term Memory (LSTM) networks to perform different tasks.
Abstract: We use Long Short Term Memory (LSTM) networks to learn representations of video sequences. Our model uses an encoder LSTM to map an input sequence into a fixed length representation. This representation is decoded using single or multiple decoder LSTMs to perform different tasks, such as reconstructing the input sequence, or predicting the future sequence. We experiment with two kinds of input sequences - patches of image pixels and high-level representations ("percepts") of video frames extracted using a pretrained convolutional net. We explore different design choices such as whether the decoder LSTMs should condition on the generated output. We analyze the outputs of the model qualitatively to see how well the model can extrapolate the learned video representation into the future and into the past. We further evaluate the representations by finetuning them for a supervised learning problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show that the representations help improve classification accuracy, especially when there are only few training examples. Even models pretrained on unrelated datasets (300 hours of YouTube videos) can help action recognition performance.
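A minimal PyTorch sketch of the encoder-decoder idea described above (not the authors' implementation): an encoder LSTM compresses a sequence of frame features into its final state, and a decoder LSTM unrolls from that state to reconstruct the sequence. The layer sizes and the unconditioned (zero-input) decoder are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Seq2SeqAutoencoder(nn.Module):
    """Encoder LSTM -> fixed-length state -> decoder LSTM (reconstruction)."""
    def __init__(self, feat_dim=1024, hidden_dim=256):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, feat_dim)

    def forward(self, x):                      # x: (batch, time, feat_dim)
        _, (h, c) = self.encoder(x)            # fixed-length representation
        # Unconditioned decoder: feed zeros, start from the encoder state.
        dec_in = torch.zeros_like(x)
        out, _ = self.decoder(dec_in, (h, c))
        return self.readout(out)               # reconstructed sequence

model = Seq2SeqAutoencoder()
frames = torch.randn(2, 16, 1024)              # e.g. 16 "percept" vectors per clip
loss = nn.functional.mse_loss(model(frames), frames)
```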

2,217 citations


Journal ArticleDOI
TL;DR: In this paper, a sharp bound on the rate of growth of chaos in thermal quantum systems with a large number of degrees of freedom is conjectured, and a precise mathematical argument, based on plausible physical assumptions, is given to establish this conjecture.
Abstract: We conjecture a sharp bound on the rate of growth of chaos in thermal quantum systems with a large number of degrees of freedom. Chaos can be diagnosed using an out-of-time-order correlation function closely related to the commutator of operators separated in time. We conjecture that the influence of chaos on this correlator can develop no faster than exponentially, with Lyapunov exponent λ L ≤ 2πk B T/ℏ. We give a precise mathematical argument, based on plausible physical assumptions, establishing this conjecture.
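Written out, the quantity and bound described above are as follows (a restatement of the abstract in the commutator notation standard in this literature; the small prefactor ε is schematic):

```latex
% Out-of-time-order growth of the squared commutator and the conjectured bound
C(t) \;=\; -\big\langle \,[W(t),\,V(0)]^{2}\,\big\rangle_{\beta}
      \;\sim\; \epsilon \, e^{\lambda_{L} t},
\qquad
\lambda_{L} \;\le\; \frac{2\pi k_{B} T}{\hbar}.
```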

2,216 citations


Journal ArticleDOI
TL;DR: The initiation of antiretroviral therapy in HIV-positive adults with a CD4+ count of more than 500 cells per cubic millimeter provided net benefits over starting such therapy after the CD4+ count had declined to 350 cells per cubic millimeter, although the risks of unscheduled hospital admissions were similar in the two groups.
Abstract: BACKGROUND Data from randomized trials are lacking on the benefits and risks of initiating antiretroviral therapy in patients with asymptomatic human immunodeficiency virus (HIV) infection who have a CD4+ count of more than 350 cells per cubic millimeter. METHODS We randomly assigned HIV-positive adults who had a CD4+ count of more than 500 cells per cubic millimeter to start antiretroviral therapy immediately (immediate-initiation group) or to defer it until the CD4+ count decreased to 350 cells per cubic millimeter or until the development of the acquired immunodeficiency syndrome (AIDS) or another condition that dictated the use of antiretroviral therapy (deferred-initiation group). The primary composite end point was any serious AIDS-related event, serious non–AIDS-related event, or death from any cause. RESULTS A total of 4685 patients were followed for a mean of 3.0 years. At study entry, the median HIV viral load was 12,759 copies per milliliter, and the median CD4+ count was 651 cells per cubic millimeter. On May 15, 2015, on the basis of an interim analysis, the data and safety monitoring board determined that the study question had been answered and recommended that patients in the deferred-initiation group be offered antiretroviral therapy. The primary end point occurred in 42 patients in the immediate-initiation group (1.8%; 0.60 events per 100 person-years), as compared with 96 patients in the deferred-initiation group (4.1%; 1.38 events per 100 person-years), for a hazard ratio of 0.43 (95% confidence interval [CI], 0.30 to 0.62; P<0.001). Hazard ratios for serious AIDS-related and serious non–AIDS-related events were 0.28 (95% CI, 0.15 to 0.50; P<0.001) and 0.61 (95% CI, 0.38 to 0.97; P = 0.04), respectively. More than two thirds of the primary end points (68%) occurred in patients with a CD4+ count of more than 500 cells per cubic millimeter. The risks of a grade 4 event were similar in the two groups, as were the risks of unscheduled hospital admissions. CONCLUSIONS The initiation of antiretroviral therapy in HIV-positive adults with a CD4+ count of more than 500 cells per cubic millimeter provided net benefits over starting such therapy in patients after the CD4+ count had declined to 350 cells per cubic millimeter. (Funded by the National Institute of Allergy and Infectious Diseases and others; START ClinicalTrials.gov number, NCT00867048.)

2,215 citations


Journal ArticleDOI
TL;DR: In this paper, the authors systematically searched 15 citation databases for population-level estimates of sepsis incidence rates and fatality in adult populations using consensus criteria and published in the last 36 years.
Abstract: Rationale: Reducing the global burden of sepsis, a recognized global health challenge, requires comprehensive data on the incidence and mortality on a global scale. Objectives: To estimate the worldwide incidence and mortality of sepsis and identify knowledge gaps based on available evidence from observational studies. Methods: We systematically searched 15 international citation databases for population-level estimates of sepsis incidence rates and fatality in adult populations using consensus criteria and published in the last 36 years. Measurements and Main Results: The search yielded 1,553 reports from 1979 to 2015, of which 45 met our criteria. A total of 27 studies from seven high-income countries provided data for meta-analysis. For these countries, the population incidence rate was 288 (95% confidence interval [CI], 215–386; τ = 0.55) for hospital-treated sepsis cases and 148 (95% CI, 98–226; τ = 0.99) for hospital-treated severe sepsis cases per 100,000 person-years. Restricted to the last decade, th...

2,212 citations


Posted Content
TL;DR: The experiments suggest that incorporating a non-zero slope for the negative part in rectified activation units consistently improves results, casting doubt on the common belief that sparsity is the key to good performance in ReLU.
Abstract: In this paper we investigate the performance of different types of rectified activation functions in convolutional neural networks: the standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU). We evaluate these activation functions on a standard image classification task. Our experiments suggest that incorporating a non-zero slope for the negative part in rectified activation units could consistently improve the results. Thus our findings are negative on the common belief that sparsity is the key to good performance in ReLU. Moreover, on small-scale datasets, using a deterministic negative slope or learning it are both prone to overfitting. They are not as effective as using their randomized counterpart. By using RReLU, we achieved 75.68% accuracy on the CIFAR-100 test set without multiple tests or ensembles.
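The four activations compared here are all available in PyTorch; a minimal sketch of swapping them into the same small network (the layer sizes and the RReLU range shown are illustrative defaults, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

def make_block(activation: nn.Module) -> nn.Sequential:
    """Same conv block, different rectified activation."""
    return nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, padding=1),
        activation,
        nn.MaxPool2d(2),
    )

variants = {
    "ReLU":       nn.ReLU(),
    "Leaky ReLU": nn.LeakyReLU(negative_slope=0.01),  # fixed small slope
    "PReLU":      nn.PReLU(),                          # slope learned by backprop
    "RReLU":      nn.RReLU(lower=1/8, upper=1/3),      # slope sampled during training
}

x = torch.randn(4, 3, 32, 32)                          # CIFAR-sized input
for name, act in variants.items():
    y = make_block(act)(x)
    print(name, tuple(y.shape))
```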

2,211 citations


Journal ArticleDOI
TL;DR: The Prostate Imaging - Reporting and Data System Version 2 (PI-RADS™ v2) simplifies and standardizes terminology and content of reports, and provides assessment categories that summarize levels of suspicion or risk of clinically significant prostate cancer that can be used to assist selection of patients for biopsies and management.

2,210 citations


Proceedings ArticleDOI
07 Jun 2015
TL;DR: This paper proposes an effective feature representation called Local Maximal Occurrence (LOMO), and a subspace and metric learning method called Cross-view Quadratic Discriminant Analysis (XQDA), and presents a practical computation method for XQDA.
Abstract: Person re-identification is an important technique towards automatic search of a person's presence in a surveillance video. Two fundamental problems are critical for person re-identification, feature representation and metric learning. An effective feature representation should be robust to illumination and viewpoint changes, and a discriminant metric should be learned to match various person images. In this paper, we propose an effective feature representation called Local Maximal Occurrence (LOMO), and a subspace and metric learning method called Cross-view Quadratic Discriminant Analysis (XQDA). The LOMO feature analyzes the horizontal occurrence of local features, and maximizes the occurrence to make a stable representation against viewpoint changes. Besides, to handle illumination variations, we apply the Retinex transform and a scale invariant texture operator. To learn a discriminant metric, we propose to learn a discriminant low dimensional subspace by cross-view quadratic discriminant analysis, and simultaneously, a QDA metric is learned on the derived subspace. We also present a practical computation method for XQDA, as well as its regularization. Experiments on four challenging person re-identification databases, VIPeR, QMUL GRID, CUHK Campus, and CUHK03, show that the proposed method improves the state-of-the-art rank-1 identification rates by 2.2%, 4.88%, 28.91%, and 31.55% on the four databases, respectively.
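XQDA, as described above, learns a low-dimensional subspace by maximizing extra-class variance relative to intra-class variance, which reduces to a generalized eigenvalue problem. A simplified NumPy/SciPy sketch of that eigenvalue step follows; the covariance construction, regularization constant, and output dimensionality here are illustrative assumptions, not the paper's full procedure.

```python
import numpy as np
from scipy.linalg import eigh

def xqda_subspace(intra_diffs, extra_diffs, dim=64, reg=1e-3):
    """Return a projection W maximizing extra-class over intra-class variance.

    intra_diffs: (n_i, d) feature differences of same-identity pairs
    extra_diffs: (n_e, d) feature differences of different-identity pairs
    """
    d = intra_diffs.shape[1]
    sigma_i = intra_diffs.T @ intra_diffs / len(intra_diffs) + reg * np.eye(d)
    sigma_e = extra_diffs.T @ extra_diffs / len(extra_diffs)
    # Generalized eigenproblem: sigma_e w = lambda * sigma_i w
    vals, vecs = eigh(sigma_e, sigma_i)
    order = np.argsort(vals)[::-1]           # keep directions with the largest ratio
    return vecs[:, order[:dim]]

# Toy usage with random descriptors of dimension 500 (hypothetical data).
rng = np.random.default_rng(0)
W = xqda_subspace(rng.normal(size=(200, 500)), rng.normal(size=(400, 500)))
print(W.shape)   # (500, 64)
```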

2,209 citations


Journal ArticleDOI
TL;DR: In this article, the authors used regression models to calculate estimates of national incidence and total number of infections, first recurrences, and deaths within 30 days after the diagnosis of C. difficile infection.
Abstract: Background The magnitude and scope of Clostridium difficile infection in the United States continue to evolve. Methods In 2011, we performed active population- and laboratory-based surveillance across 10 geographic areas in the United States to identify cases of C. difficile infection (stool specimens positive for C. difficile on either toxin or molecular assay in residents ≥1 year of age). Cases were classified as community-associated or health care–associated. In a sample of cases of C. difficile infection, specimens were cultured and isolates underwent molecular typing. We used regression models to calculate estimates of national incidence and total number of infections, first recurrences, and deaths within 30 days after the diagnosis of C. difficile infection. Results A total of 15,461 cases of C. difficile infection were identified in the 10 geographic areas; 65.8% were health care–associated, but only 24.2% had onset during hospitalization. After adjustment for predictors of disease incidence, the estimated number of incident C. difficile infections in the United States was 453,000 (95% confidence interval [CI], 397,100 to 508,500). The incidence was estimated to be higher among females (rate ratio, 1.26; 95% CI, 1.25 to 1.27), whites (rate ratio, 1.72; 95% CI, 1.56 to 2.0), and persons 65 years of age or older (rate ratio, 8.65; 95% CI, 8.16 to 9.31). The estimated number of first recurrences of C. difficile infection was 83,000 (95% CI, 57,000 to 108,900), and the estimated number of deaths was 29,300 (95% CI, 16,500 to 42,100). The North American pulsed-field gel electrophoresis type 1 (NAP1) strain was more prevalent among health care–associated infections than among community-associated infections (30.7% vs. 18.8%, P<0.001). Conclusions C. difficile was responsible for almost half a million infections and was associated with approximately 29,000 deaths in 2011. (Funded by the Centers for Disease Control and Prevention.)

2,209 citations


Journal ArticleDOI
TL;DR: The need for surgical services in low- and middle-income countries will continue to rise substantially from now until 2030, with a large projected increase in the incidence of cancer, road traffic injuries, and cardiovascular and metabolic diseases in LMICs.

2,209 citations


Proceedings Article
01 Jan 2016
TL;DR: Deep convolutional generative adversarial networks (DCGANs) as discussed by the authors learn a hierarchy of representations from object parts to scenes in both the generator and discriminator for unsupervised learning.
Abstract: In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.
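A minimal PyTorch sketch of a DCGAN-style generator following the architectural constraints described above (fractionally-strided convolutions, batch normalization, ReLU in the generator, tanh output); the channel counts and the 64x64 output size are the commonly used values and are assumptions, not necessarily the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Project a latent vector z to a 64x64 RGB image with transposed convolutions."""
    def __init__(self, z_dim=100, ngf=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, ngf * 8, 4, 1, 0, bias=False),    # 1x1 -> 4x4
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),  # 4x4 -> 8x8
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),  # 8x8 -> 16x16
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),      # 16x16 -> 32x32
            nn.BatchNorm2d(ngf), nn.ReLU(True),
            nn.ConvTranspose2d(ngf, 3, 4, 2, 1, bias=False),            # 32x32 -> 64x64
            nn.Tanh(),
        )

    def forward(self, z):                 # z: (batch, z_dim, 1, 1)
        return self.net(z)

fake = DCGANGenerator()(torch.randn(8, 100, 1, 1))
print(fake.shape)                         # torch.Size([8, 3, 64, 64])
```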

Journal ArticleDOI
TL;DR: Preliminary data show that tocilizumab, which improved the clinical outcome immediately in severe and critical COVID-19 patients, is an effective treatment to reduce mortality.
Abstract: After analyzing the immune characteristics of patients with severe coronavirus disease 2019 (COVID-19), we have identified that pathogenic T cells and inflammatory monocytes secreting large amounts of interleukin 6 may incite the inflammatory storm, which may potentially be curbed through a monoclonal antibody that targets the IL-6 pathways. Here, we aimed to assess the efficacy of tocilizumab in patients with severe COVID-19 and seek a therapeutic strategy. Patients diagnosed with severe or critical COVID-19 in The First Affiliated Hospital of University of Science and Technology of China (Anhui Provincial Hospital) and Anhui Fuyang Second People’s Hospital were given tocilizumab in addition to routine therapy between 5 and 14 February 2020. The changes in clinical manifestations, computerized tomography (CT) scan images, and laboratory examinations were retrospectively analyzed. Fever returned to normal on the first day, and other symptoms improved remarkably within a few days. Within 5 d after tocilizumab, 15 of the 20 patients (75.0%) had lowered their oxygen intake, and 1 patient needed no oxygen therapy. CT scans showed that the lung lesion opacity had been absorbed in 19 patients (90.5%). The percentage of lymphocytes in peripheral blood, which decreased in 85.0% of patients (17/20) before treatment (mean, 15.52 ± 8.89%), returned to normal in 52.6% of patients (10/19) on the fifth day after treatment. Abnormally elevated C-reactive protein decreased significantly in 84.2% of patients (16/19). No obvious adverse reactions were observed. All patients were discharged on average 15.1 d after receiving tocilizumab. Preliminary data show that tocilizumab, which improved the clinical outcome immediately in severe and critical COVID-19 patients, is an effective treatment to reduce mortality.

Journal ArticleDOI
14 Apr 2020-Science
TL;DR: Existing data are used to build a deterministic model of multiyear interactions between existing coronaviruses, with a focus on the United States, to project the potential epidemic dynamics and pressures on critical care capacity over the next 5 years; the model projects that recurrent wintertime outbreaks of SARS-CoV-2 will probably occur after the initial, most severe pandemic wave.
Abstract: It is urgent to understand the future of severe acute respiratory syndrome-coronavirus 2 (SARS-CoV-2) transmission. We used estimates of seasonality, immunity, and cross-immunity for human coronavirus OC43 (HCoV-OC43) and HCoV-HKU1 using time-series data from the United States to inform a model of SARS-CoV-2 transmission. We projected that recurrent wintertime outbreaks of SARS-CoV-2 will probably occur after the initial, most severe pandemic wave. Absent other interventions, a key metric for the success of social distancing is whether critical care capacities are exceeded. To avoid this, prolonged or intermittent social distancing may be necessary into 2022. Additional interventions, including expanded critical care capacity and an effective therapeutic, would improve the success of intermittent distancing and hasten the acquisition of herd immunity. Longitudinal serological studies are urgently needed to determine the extent and duration of immunity to SARS-CoV-2. Even in the event of apparent elimination, SARS-CoV-2 surveillance should be maintained because a resurgence in contagion could be possible as late as 2024.
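As a toy illustration only (not the authors' fitted model), the kind of deterministic transmission model with seasonal forcing and waning immunity underlying such projections can be sketched as a forced SIRS system; every parameter value below is an arbitrary assumption for demonstration.

```python
import numpy as np
from scipy.integrate import odeint

def sirs(y, t, beta0, seasonality, gamma, waning):
    """SIRS model with a sinusoidally forced transmission rate."""
    S, I, R = y
    beta = beta0 * (1 + seasonality * np.cos(2 * np.pi * t / 365))
    dS = -beta * S * I + waning * R
    dI = beta * S * I - gamma * I
    dR = gamma * I - waning * R
    return dS, dI, dR

t = np.arange(0, 5 * 365)                      # five years, in days
y0 = (0.999, 0.001, 0.0)                       # fractions of the population
S, I, R = odeint(sirs, y0, t,
                 args=(0.5, 0.3, 1 / 5, 1 / 300)).T
print("peak prevalence per year:", I.reshape(5, 365).max(axis=1).round(4))
```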

Posted Content
TL;DR: This article seeks to help ML practitioners apply MTL by shedding light on how MTL works and providing guidelines for choosing appropriate auxiliary tasks, particularly in deep neural networks.
Abstract: Multi-task learning (MTL) has led to successes in many applications of machine learning, from natural language processing and speech recognition to computer vision and drug discovery. This article aims to give a general overview of MTL, particularly in deep neural networks. It introduces the two most common methods for MTL in Deep Learning, gives an overview of the literature, and discusses recent advances. In particular, it seeks to help ML practitioners apply MTL by shedding light on how MTL works and providing guidelines for choosing appropriate auxiliary tasks.
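A minimal PyTorch sketch of hard parameter sharing, the most common deep MTL method of the kind this overview covers: a shared trunk with one head per task, trained on the sum of task losses. The layer sizes and the two-task setup are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    """Shared hidden layers, task-specific output heads."""
    def __init__(self, in_dim=128, hidden=64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, hidden), nn.ReLU())
        self.head_main = nn.Linear(hidden, 10)   # e.g. main classification task
        self.head_aux = nn.Linear(hidden, 1)     # e.g. auxiliary regression task

    def forward(self, x):
        h = self.shared(x)
        return self.head_main(h), self.head_aux(h)

model = HardSharingMTL()
x = torch.randn(32, 128)
logits, aux = model(x)
loss = (nn.functional.cross_entropy(logits, torch.randint(0, 10, (32,)))
        + nn.functional.mse_loss(aux.squeeze(-1), torch.randn(32)))
loss.backward()                                   # gradients flow into the shared trunk
```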

Journal ArticleDOI
TL;DR: An analysis of global forest cover is conducted to reveal that 70% of remaining forest is within 1 km of the forest’s edge, subject to the degrading effects of fragmentation, indicating an urgent need for conservation and restoration measures to improve landscape connectivity.
Abstract: We conducted an analysis of global forest cover to reveal that 70% of remaining forest is within 1 km of the forest’s edge, subject to the degrading effects of fragmentation. A synthesis of fragmentation experiments spanning multiple biomes and scales, five continents, and 35 years demonstrates that habitat fragmentation reduces biodiversity by 13 to 75% and impairs key ecosystem functions by decreasing biomass and altering nutrient cycles. Effects are greatest in the smallest and most isolated fragments, and they magnify with the passage of time. These findings indicate an urgent need for conservation and restoration measures to improve landscape connectivity, which will reduce extinction rates and help maintain ecosystem services.

Journal ArticleDOI
TL;DR: Inflammation is a biological response of the immune system that can be triggered by a variety of factors, including pathogens, damaged cells and toxic compounds, potentially leading to tissue damage or disease.
Abstract: Inflammation is a biological response of the immune system that can be triggered by a variety of factors, including pathogens, damaged cells and toxic compounds. These factors may induce acute and/or chronic inflammatory responses in the heart, pancreas, liver, kidney, lung, brain, intestinal tract and reproductive system, potentially leading to tissue damage or disease. Both infectious and non-infectious agents and cell damage activate inflammatory cells and trigger inflammatory signaling pathways, most commonly the NF-κB, MAPK, and JAK-STAT pathways. Here, we review inflammatory responses within organs, focusing on the etiology of inflammation, inflammatory response mechanisms, resolution of inflammation, and organ-specific inflammatory responses.

Proceedings ArticleDOI
07 Dec 2015
TL;DR: In this article, a CNN architecture is proposed to combine information from multiple views of a 3D shape into a single and compact shape descriptor, which can be applied to accurately recognize human hand-drawn sketches of shapes.
Abstract: A longstanding question in computer vision concerns the representation of 3D shapes for recognition: should 3D shapes be represented with descriptors operating on their native 3D formats, such as voxel grid or polygon mesh, or can they be effectively represented with view-based descriptors? We address this question in the context of learning to recognize 3D shapes from a collection of their rendered views on 2D images. We first present a standard CNN architecture trained to recognize the shapes' rendered views independently of each other, and show that a 3D shape can be recognized even from a single view at an accuracy far higher than using state-of-the-art 3D shape descriptors. Recognition rates further increase when multiple views of the shapes are provided. In addition, we present a novel CNN architecture that combines information from multiple views of a 3D shape into a single and compact shape descriptor offering even better recognition performance. The same architecture can be applied to accurately recognize human hand-drawn sketches of shapes. We conclude that a collection of 2D views can be highly informative for 3D shape recognition and is amenable to emerging CNN architectures and their derivatives.
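A simplified PyTorch sketch of the multi-view idea: run the same 2D CNN over each rendered view and pool the per-view features element-wise (max) into one compact shape descriptor. The backbone, feature size, and number of views are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultiViewCNN(nn.Module):
    """Shared 2D CNN per view + element-wise max pooling across views."""
    def __init__(self, n_classes=40, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim))
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, views):                     # views: (batch, n_views, 3, H, W)
        b, v = views.shape[:2]
        feats = self.backbone(views.flatten(0, 1))            # (batch * n_views, feat_dim)
        descriptor = feats.view(b, v, -1).max(dim=1).values   # view pooling
        return self.classifier(descriptor)

logits = MultiViewCNN()(torch.randn(2, 12, 3, 64, 64))    # 12 rendered views per shape
print(logits.shape)                                        # torch.Size([2, 40])
```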

Posted Content
Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, Alexander Belopolsky, Yoshua Bengio, Arnaud Bergeron, James Bergstra, Valentin Bisson, Josh Bleecher Snyder, Nicolas Bouchard, Nicolas Boulanger-Lewandowski, Xavier Bouthillier, Alexandre de Brébisson, Olivier Breuleux, Pierre Luc Carrier, Kyunghyun Cho, Jan Chorowski, Paul F. Christiano, Tim Cooijmans, Marc-Alexandre Côté, Myriam Côté, Aaron Courville, Yann N. Dauphin, Olivier Delalleau, Julien Demouth, Guillaume Desjardins, Sander Dieleman, Laurent Dinh, Mélanie Ducoffe, Vincent Dumoulin, Samira Ebrahimi Kahou, Dumitru Erhan, Ziye Fan, Orhan Firat, Mathieu Germain, Xavier Glorot, Ian Goodfellow, Matthew M. Graham, Caglar Gulcehre, Philippe Hamel, Iban Harlouchet, Jean-Philippe Heng, Balázs Hidasi, Sina Honari, Arjun Jain, Sébastien Jean, Kai Jia, Mikhail Korobov, Vivek Kulkarni, Alex Lamb, Pascal Lamblin, Eric Larsen, César Laurent, Sean Lee, Simon Lefrancois, Simon Lemieux, Nicholas Léonard, Zhouhan Lin, Jesse A. Livezey, Cory Lorenz, Jeremiah Lowin, Qianli Ma, Pierre-Antoine Manzagol, Olivier Mastropietro, Robert T. McGibbon, Roland Memisevic, Bart van Merriënboer, Vincent Michalski, Mehdi Mirza, Alberto Orlandi, Chris Pal, Razvan Pascanu, Mohammad Pezeshki, Colin Raffel, Daniel Renshaw, Matthew Rocklin, Adriana Romero, Markus Roth, Peter Sadowski, John Salvatier, François Savard, Jan Schlüter, John Schulman, Gabriel Schwartz, Iulian Vlad Serban, Dmitriy Serdyuk, Samira Shabanian, Étienne Simon, Sigurd Spieckermann, S. Ramana Subramanyam, Jakub Sygnowski, Jérémie Tanguay, Gijs van Tulder, Joseph Turian, Sebastian Urban, Pascal Vincent, Francesco Visin, Harm de Vries, David Warde-Farley, Dustin J. Webb, Matthew Willson, Kelvin Xu, Lijun Xue, Li Yao, Saizheng Zhang, Ying Zhang 
TL;DR: The performance of Theano is compared against Torch7 and TensorFlow on several machine learning models and recently-introduced functionalities and improvements are discussed.
Abstract: Theano is a Python library that allows one to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Since its introduction, it has been one of the most used CPU and GPU mathematical compilers - especially in the machine learning community - and has shown steady performance improvements. Theano has been actively and continuously developed since 2008; multiple frameworks have been built on top of it, and it has been used to produce many state-of-the-art machine learning models. The present article is structured as follows. Section I provides an overview of the Theano software and its community. Section II presents the principal features of Theano and how to use them, and compares them with other similar projects. Section III focuses on recently-introduced functionalities and improvements. Section IV compares the performance of Theano against Torch7 and TensorFlow on several machine learning models. Section V discusses current limitations of Theano and potential ways of improving it.
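A minimal example of the define-optimize-evaluate workflow the abstract describes, using Theano's standard symbolic API (generic library usage, not code from the paper):

```python
import numpy as np
import theano
import theano.tensor as T

# Define a symbolic expression: a logistic regression forward pass.
x = T.dmatrix("x")
w = theano.shared(np.zeros((3, 1)), name="w")
b = theano.shared(0.0, name="b")
p = T.nnet.sigmoid(T.dot(x, w) + b)

# Theano compiles (and optimizes) the expression graph into a callable,
# targeting either CPU or GPU.
predict = theano.function(inputs=[x], outputs=p)

print(predict(np.random.randn(4, 3)))
```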

Journal ArticleDOI
TL;DR: In this paper, the authors present an open-source implementation of structural equation models (SEM), a form of path analysis that resolves complex multivariate relationships among a suite of interrelated variables.
Abstract: Summary Ecologists and evolutionary biologists rely on an increasingly sophisticated set of statistical tools to describe complex natural systems. One such tool that has gained significant traction in the biological sciences is structural equation models (SEM), a form of path analysis that resolves complex multivariate relationships among a suite of interrelated variables. Evaluation of SEMs has historically relied on covariances among variables, rather than the values of the data points themselves. While this approach permits a wide variety of model forms, it limits the incorporation of detailed specifications. Recent developments have allowed for the simultaneous implementation of non-normal distributions, random effects and different correlation structures using local estimation, but this process is not yet automated and consequently, evaluation can be prohibitive with complex models. Here, I present a fully documented, open-source package piecewiseSEM, a practical implementation of confirmatory path analysis for the R programming language. The package extends this method to all current (generalized) linear, (phylogenetic) least-square, and mixed effects models, relying on familiar R syntax. I also provide two worked examples: one involving random effects and temporal autocorrelation, and a second involving phylogenetically independent contrasts. My goal is to provide a user-friendly and tractable implementation of SEM that also reflects the ecological and methodological processes generating data.

Journal ArticleDOI
TL;DR: The preferred reporting items for systematic reviews and meta-analyses (PRISMA) statement as discussed by the authors was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found.
Abstract: The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.

Journal ArticleDOI
08 May 2019
TL;DR: The Factored Bayes-Adaptive POMDP (FBA-POMDP) framework is introduced to learn a compact model of the dynamics by exploiting the underlying structure of a POMDP, casting the problem as a planning task solved with an adapted Monte-Carlo Tree Search algorithm and a belief tracking method that approximates the joint posterior over state and model variables.
Abstract: Model-based Bayesian Reinforcement Learning (BRL) provides a principled solution to dealing with the exploration-exploitation trade-off, but such methods typically assume a fully observable environment. The few Bayesian RL methods that are applicable in partially observable domains, such as the Bayes-Adaptive POMDP (BA-POMDP), scale poorly. To address this issue, we introduce the Factored BA-POMDP model (FBA-POMDP), a framework that is able to learn a compact model of the dynamics by exploiting the underlying structure of a POMDP. The FBA-POMDP framework casts the problem as a planning task, for which we adapt the Monte-Carlo Tree Search planning algorithm and develop a belief tracking method to approximate the joint posterior over the state and model variables. Our empirical results show that this method outperforms a number of BRL baselines and is able to learn efficiently when the factorization is known, as well as learn both the factorization and the model parameters simultaneously.

Journal ArticleDOI
TL;DR: A perspective on the basic concepts of convolutional neural network and its application to various radiological tasks is offered, and its challenges and future directions in the field of radiology are discussed.
Abstract: Convolutional neural network (CNN), a class of artificial neural networks that has become dominant in various computer vision tasks, is attracting interest across a variety of domains, including radiology. CNN is designed to automatically and adaptively learn spatial hierarchies of features through backpropagation by using multiple building blocks, such as convolution layers, pooling layers, and fully connected layers. This review article offers a perspective on the basic concepts of CNN and its application to various radiological tasks, and discusses its challenges and future directions in the field of radiology. Two challenges in applying CNN to radiological tasks, small dataset and overfitting, will also be covered in this article, as well as techniques to minimize them. Being familiar with the concepts and advantages, as well as limitations, of CNN is essential to leverage its potential in diagnostic radiology, with the goal of augmenting the performance of radiologists and improving patient care.
• Convolutional neural network is a class of deep learning methods which has become dominant in various computer vision tasks and is attracting interest across a variety of domains, including radiology.
• Convolutional neural network is composed of multiple building blocks, such as convolution layers, pooling layers, and fully connected layers, and is designed to automatically and adaptively learn spatial hierarchies of features through a backpropagation algorithm.
• Familiarity with the concepts and advantages, as well as limitations, of convolutional neural network is essential to leverage its potential to improve radiologist performance and, eventually, patient care.
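A minimal PyTorch sketch of the building blocks the review describes (convolution, pooling, and fully connected layers trained end-to-end by backpropagation); the sizes, the single-channel 64x64 input, and the two-class output are purely illustrative assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolution layer: local feature maps
    nn.ReLU(),
    nn.MaxPool2d(2),                             # pooling layer: spatial down-sampling
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),                  # fully connected layer: e.g. normal vs. abnormal
)

image = torch.randn(1, 1, 64, 64)                # one single-channel 64x64 patch
logits = model(image)
loss = nn.functional.cross_entropy(logits, torch.tensor([1]))
loss.backward()                                  # backpropagation updates all layers
```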

Journal ArticleDOI
TL;DR: Authors/Task Force Members: Massimo F. Piepoli (Chairperson), Arno W. Hoes (Co-Chairperson) (The Netherlands), Stefan Agewall (Norway) 1, Christian Albus (Germany)9, Carlos Brotons (Spain)10, Alberico L. Catapano (Italy)3, Marie-Therese Cooney (Ireland)1, Ugo Corrà (Italy).
Abstract: Authors/Task Force Members: Massimo F. Piepoli* (Chairperson) (Italy), Arno W. Hoes* (Co-Chairperson) (The Netherlands), Stefan Agewall (Norway)1, Christian Albus (Germany)9, Carlos Brotons (Spain)10, Alberico L. Catapano (Italy)3, Marie-Therese Cooney (Ireland)1, Ugo Corrà (Italy)1, Bernard Cosyns (Belgium)1, Christi Deaton (UK)1, Ian Graham (Ireland)1, Michael Stephen Hall (UK)7, F. D. Richard Hobbs (UK)10, Maja-Lisa Løchen (Norway)1, Herbert Löllgen (Germany)8, Pedro Marques-Vidal (Switzerland)1, Joep Perk (Sweden)1, Eva Prescott (Denmark)1, Josep Redon (Spain)5, Dimitrios J. Richter (Greece)1, Naveed Sattar (UK)2, Yvo Smulders (The Netherlands)1, Monica Tiberi (Italy)1, H. Bart van der Worp (The Netherlands)6, Ineke van Dis (The Netherlands)4, W. M. Monique Verschuren (The Netherlands)1

Journal ArticleDOI
TL;DR: In this article, the authors provide a guided tour through the development of artificial self-propelling microparticles and nanoparticles and their application to the study of nonequilibrium phenomena, as well as the open challenges that the field is currently facing.
Abstract: Differently from passive Brownian particles, active particles, also known as self-propelled Brownian particles or microswimmers and nanoswimmers, are capable of taking up energy from their environment and converting it into directed motion. Because of this constant flow of energy, their behavior can be explained and understood only within the framework of nonequilibrium physics. In the biological realm, many cells perform directed motion, for example, as a way to browse for nutrients or to avoid toxins. Inspired by these motile microorganisms, researchers have been developing artificial particles that feature similar swimming behaviors based on different mechanisms. These man-made micromachines and nanomachines hold a great potential as autonomous agents for health care, sustainability, and security applications. With a focus on the basic physical features of the interactions of self-propelled Brownian particles with a crowded and complex environment, this comprehensive review will provide a guided tour through its basic principles, the development of artificial self-propelling microparticles and nanoparticles, and their application to the study of nonequilibrium phenomena, as well as the open challenges that the field is currently facing.
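A toy simulation of the standard active Brownian particle model often used to illustrate the self-propulsion described above: constant speed along an orientation that diffuses rotationally, on top of ordinary translational noise. The parameter values are arbitrary; this is only a conceptual sketch, not material from the review.

```python
import numpy as np

def simulate_abp(n_steps=10_000, dt=1e-3, v=3.0, d_t=0.1, d_r=1.0, seed=0):
    """2D active Brownian particle: translational + rotational diffusion plus self-propulsion."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_steps, 2))
    theta = 0.0
    for i in range(1, n_steps):
        heading = np.array([np.cos(theta), np.sin(theta)])
        pos[i] = (pos[i - 1]
                  + v * heading * dt                                  # directed motion
                  + np.sqrt(2 * d_t * dt) * rng.standard_normal(2))   # thermal noise
        theta += np.sqrt(2 * d_r * dt) * rng.standard_normal()        # orientation decorrelates
    return pos

trajectory = simulate_abp()
print("squared displacement at the end:", np.sum(trajectory[-1] ** 2))
```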

Proceedings Article
20 Jun 2020
TL;DR: It is shown for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler.
Abstract: We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
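A short sketch of running a publicly released wav2vec 2.0 checkpoint for transcription through the Hugging Face transformers API; the checkpoint name and usage pattern are the commonly documented ones and are assumptions here, not details from the paper itself.

```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# `waveform` is a 16 kHz mono speech signal as a float array.
waveform = np.zeros(16000, dtype=np.float32)      # placeholder: one second of silence
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits    # per-frame token logits
pred_ids = logits.argmax(dim=-1)
print(processor.batch_decode(pred_ids))           # greedy CTC decoding
```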

Journal ArticleDOI
29 Jan 2015-Nature
TL;DR: Structure-guided engineering of a CRISPR-Cas9 complex to mediate efficient transcriptional activation at endogenous genomic loci is described and the potential of Cas9-based activators as a powerful genetic perturbation technology is demonstrated.
Abstract: Systematic interrogation of gene function requires the ability to perturb gene expression in a robust and generalizable manner. Here we describe structure-guided engineering of a CRISPR-Cas9 complex to mediate efficient transcriptional activation at endogenous genomic loci. We used these engineered Cas9 activation complexes to investigate single-guide RNA (sgRNA) targeting rules for effective transcriptional activation, to demonstrate multiplexed activation of ten genes simultaneously, and to upregulate long intergenic non-coding RNA (lincRNA) transcripts. We also synthesized a library consisting of 70,290 guides targeting all human RefSeq coding isoforms to screen for genes that, upon activation, confer resistance to a BRAF inhibitor. The top hits included genes previously shown to be able to confer resistance, and novel candidates were validated using individual sgRNA and complementary DNA overexpression. A gene expression signature based on the top screening hits correlated with markers of BRAF inhibitor resistance in cell lines and patient-derived samples. These results collectively demonstrate the potential of Cas9-based activators as a powerful genetic perturbation technology.

Journal ArticleDOI
Fei Xiao1, Meiwen Tang1, Xiaobin Zheng1, Ye Liu1, Xiaofeng Li1, Hong Shan1 
TL;DR: No abstract available. Keywords: ACE2; Gastrointestinal Infection; Oral-Fecal Transmission; SARS-CoV-2.

Posted Content
TL;DR: Using MPNNs, state of the art results on an important molecular property prediction benchmark are demonstrated and it is believed future work should focus on datasets with larger molecules or more accurate ground truth labels.
Abstract: Supervised learning on molecules has incredible potential to be useful in chemistry, drug discovery, and materials science. Luckily, several promising and closely related neural network models invariant to molecular symmetries have already been described in the literature. These models learn a message passing algorithm and aggregation procedure to compute a function of their entire input graph. At this point, the next step is to find a particularly effective variant of this general approach and apply it to chemical prediction benchmarks until we either solve them or reach the limits of the approach. In this paper, we reformulate existing models into a single common framework we call Message Passing Neural Networks (MPNNs) and explore additional novel variations within this framework. Using MPNNs we demonstrate state of the art results on an important molecular property prediction benchmark; these results are strong enough that we believe future work should focus on datasets with larger molecules or more accurate ground truth labels.
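A stripped-down sketch of one message passing step in the MPNN framework described above (a message function, aggregation over neighbours, then a node update), written with a dense adjacency matrix in PyTorch; the specific message and update networks are illustrative choices, not the paper's best-performing variant.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One round of message passing on a dense adjacency matrix."""
    def __init__(self, node_dim=16):
        super().__init__()
        self.message_fn = nn.Linear(node_dim, node_dim)   # message function M_t
        self.update_fn = nn.GRUCell(node_dim, node_dim)   # update function U_t

    def forward(self, h, adj):           # h: (n_nodes, node_dim), adj: (n_nodes, n_nodes)
        messages = adj @ self.message_fn(h)      # sum messages from neighbours
        return self.update_fn(messages, h)       # update each node state

# Toy molecule-sized graph: 5 atoms with ring-like connectivity.
h = torch.randn(5, 16)
adj = torch.tensor([[0, 1, 0, 0, 1],
                    [1, 0, 1, 0, 0],
                    [0, 1, 0, 1, 0],
                    [0, 0, 1, 0, 1],
                    [1, 0, 0, 1, 0]], dtype=torch.float32)
layer = MessagePassingLayer()
h = layer(h, adj)                        # repeat for T steps, then apply a readout function
print(h.shape)                           # torch.Size([5, 16])
```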

Journal ArticleDOI
TL;DR: Zhang et al. as mentioned in this paper proposed a novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, and achieved remarkable performances in both lexicon free and lexicon-based scene text recognition tasks.
Abstract: Image-based sequence recognition has been a long-standing research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most of the existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences in arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performances in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts. Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies the generality of it.
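A compact PyTorch sketch of the three stages described above: convolutional feature extraction, recurrent sequence modeling with a bidirectional LSTM, and per-frame class scores suitable for CTC-based transcription. The channel counts, input height, and alphabet size are illustrative assumptions rather than the paper's exact settings.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Convolutional features -> BiLSTM sequence model -> per-frame class scores."""
    def __init__(self, n_classes=37):                 # e.g. 36 characters + CTC blank
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.rnn = nn.LSTM(128 * 8, 256, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(512, n_classes)

    def forward(self, img):                           # img: (batch, 1, 32, W)
        f = self.cnn(img)                             # (batch, 128, 8, W/4)
        f = f.permute(0, 3, 1, 2).flatten(2)          # (batch, W/4, 128*8): a feature sequence
        out, _ = self.rnn(f)
        return self.fc(out)                           # (batch, W/4, n_classes)

scores = CRNN()(torch.randn(2, 1, 32, 100))           # grayscale text-line images
print(scores.shape)                                   # torch.Size([2, 25, 37])
# During training, these frame-wise scores would feed nn.CTCLoss for transcription.
```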

Journal ArticleDOI
TL;DR: The BEAST software package unifies molecular phylogenetic reconstruction with complex discrete and continuous trait evolution, divergence-time dating, and coalescent demographic models in an efficient statistical inference engine using Markov chain Monte Carlo integration.
Abstract: The Bayesian Evolutionary Analysis by Sampling Trees (BEAST) software package has become a primary tool for Bayesian phylogenetic and phylodynamic inference from genetic sequence data. BEAST unifies molecular phylogenetic reconstruction with complex discrete and continuous trait evolution, divergence-time dating, and coalescent demographic models in an efficient statistical inference engine using Markov chain Monte Carlo integration. A convenient, cross-platform, graphical user interface allows the flexible construction of complex evolutionary analyses.