Journal ArticleDOI
TL;DR: Results indicate that while isolation is a necessary measure to protect public health, it alters physical activity and eating behaviours in a health-compromising direction.
Abstract: Background: Public health recommendations and governmental measures during the COVID-19 pandemic have resulted in numerous restrictions on daily living including social distancing, isolation and home confinement. While these measures are imperative to abate the spreading of COVID-19, the impact of these restrictions on health behaviours and lifestyles at home is undefined. Therefore, an international online survey was launched in April 2020, in seven languages, to elucidate the behavioural and lifestyle consequences of COVID-19 restrictions. This report presents the results from the first thousand responders on physical activity (PA) and nutrition behaviours. Methods: Following a structured review of the literature, the “Effects of home Confinement on multiple Lifestyle Behaviours during the COVID-19 outbreak (ECLB-COVID19)” electronic survey was designed by a steering group of multidisciplinary scientists and academics. The survey was uploaded and shared on the Google online survey platform. Thirty-five research organisations from Europe, North Africa, Western Asia and the Americas promoted the survey in English, German, French, Arabic, Spanish, Portuguese and Slovenian languages. Questions were presented in a differential format, with questions related to responses “before” and “during” confinement conditions. Results: 1047 replies (54% women) from Asia (36%), Africa (40%), Europe (21%) and other (3%) were included in the analysis. The COVID-19 home confinement had a negative effect on all PA intensity levels (vigorous, moderate, walking and overall). Additionally, daily sitting time increased from 5 to 8 h per day. Food consumption and meal patterns (the type of food, eating out of control, snacks between meals, number of main meals) were more unhealthy during confinement, with only alcohol binge drinking decreasing significantly. Conclusion: While isolation is a necessary measure to protect public health, results indicate that it alters physical activity and eating behaviours in a health-compromising direction. A more detailed analysis of survey data will allow for a segregation of these responses in different age groups, countries and other subgroups, which will help develop interventions to mitigate the negative lifestyle behaviours that have manifested during the COVID-19 confinement.

1,275 citations


Proceedings Article
05 Dec 2016
TL;DR: The gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.
Abstract: This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-of-the-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.
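The gated layer the TL;DR refers to has a compact form: each feature map passes through a tanh branch and a sigmoid gate whose product is the output, with the conditioning vector added to both branches. A minimal PyTorch sketch follows; the class name, kernel size and the omission of PixelCNN's causal masking are illustrative assumptions, not the paper's exact implementation.

import torch
import torch.nn as nn

class GatedConditionalConv(nn.Module):
    # Gated activation: y = tanh(W_f*x + V_f h) * sigmoid(W_g*x + V_g h),
    # where h is the conditioning vector (class-label embedding, portrait
    # embedding, etc.).  Causal masking is omitted for brevity.
    def __init__(self, channels, cond_dim):
        super().__init__()
        self.conv_f = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv_g = nn.Conv2d(channels, channels, 3, padding=1)
        self.cond_f = nn.Linear(cond_dim, channels)
        self.cond_g = nn.Linear(cond_dim, channels)

    def forward(self, x, h):
        f = self.conv_f(x) + self.cond_f(h)[:, :, None, None]
        g = self.conv_g(x) + self.cond_g(h)[:, :, None, None]
        return torch.tanh(f) * torch.sigmoid(g)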

1,275 citations


Journal ArticleDOI
16 Jan 2020-BMJ
TL;DR: The development of the SWiM guideline for the synthesis of quantitative data of intervention effects is described and the nine SWiM reporting items with accompanying explanations and examples are presented.
Abstract: In systematic reviews that lack data amenable to meta-analysis, alternative synthesis methods are commonly used, but these methods are rarely reported. This lack of transparency in the methods can cast doubt on the validity of the review findings. The Synthesis Without Meta-analysis (SWiM) guideline has been developed to guide clear reporting in reviews of interventions in which alternative synthesis methods to meta-analysis of effect estimates are used. This article describes the development of the SWiM guideline for the synthesis of quantitative data of intervention effects and presents the nine SWiM reporting items with accompanying explanations and examples.

1,275 citations


Journal ArticleDOI
TL;DR: Broad-spectrum antiviral GS-5734 inhibits both epidemic and zoonotic coronaviruses in vitro and in vivo and may prove effective against endemic MERS-CoV in the Middle East, circulating human CoV, and, possibly most importantly, emerging CoV of the future.
Abstract: Emerging viral infections are difficult to control because heterogeneous members periodically cycle in and out of humans and zoonotic hosts, complicating the development of specific antiviral therapies and vaccines. Coronaviruses (CoVs) have a proclivity to spread rapidly into new host species causing severe disease. Severe acute respiratory syndrome CoV (SARS-CoV) and Middle East respiratory syndrome CoV (MERS-CoV) successively emerged, causing severe epidemic respiratory disease in immunologically naive human populations throughout the globe. Broad-spectrum therapies capable of inhibiting CoV infections would address an immediate unmet medical need and could be invaluable in the treatment of emerging and endemic CoV infections. We show that a nucleotide prodrug, GS-5734, currently in clinical development for treatment of Ebola virus disease, can inhibit SARS-CoV and MERS-CoV replication in multiple in vitro systems, including primary human airway epithelial cell cultures with submicromolar IC50 values. GS-5734 was also effective against bat CoVs, prepandemic bat CoVs, and circulating contemporary human CoV in primary human lung cells, thus demonstrating broad-spectrum anti-CoV activity. In a mouse model of SARS-CoV pathogenesis, prophylactic and early therapeutic administration of GS-5734 significantly reduced lung viral load and improved clinical signs of disease as well as respiratory function. These data provide substantive evidence that GS-5734 may prove effective against endemic MERS-CoV in the Middle East, circulating human CoV, and, possibly most importantly, emerging CoV of the future.

1,274 citations


Journal ArticleDOI
TL;DR: In this article, part of a special issue on digital innovation management, the authors propose four new theorizing logics, or elements, that are likely to be valuable in constructing more accurate explanations of innovation processes and outcomes in an increasingly digital world.
Abstract: Rapid and pervasive digitization of innovation processes and outcomes has upended extant theories on innovation management by calling into question fundamental assumptions about the definitional boundaries for innovation, agency for innovation, and the relationship between innovation processes and outcomes. There is a critical need for novel theorizing on digital innovation management that does not rely on such assumptions and draws on the rich and rapidly emerging research on digital technologies. We offer suggestions for such theorizing in the form of four new theorizing logics, or elements, that are likely to be valuable in constructing more accurate explanations of innovation processes and outcomes in an increasingly digital world. These logics can open new avenues for researchers to contribute to this important area. Our suggestions in this paper, coupled with the six research notes included in the special issue on digital innovation management, seek to offer a broader foundation for reinventing innovation management research in a digital world.

1,274 citations


Proceedings Article
07 Jun 2017
TL;DR: In this article, an actor-critic method was used to learn multi-agent coordination policies in cooperative and competitive multi-player RL games, where agent populations are able to discover various physical and informational coordination strategies.
Abstract: We explore deep reinforcement learning methods for multi-agent domains. We begin by analyzing the difficulty of traditional algorithms in the multi-agent case: Q-learning is challenged by an inherent non-stationarity of the environment, while policy gradient suffers from a variance that increases as the number of agents grows. We then present an adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination. Additionally, we introduce a training regimen utilizing an ensemble of policies for each agent that leads to more robust multi-agent policies. We show the strength of our approach compared to existing methods in cooperative as well as competitive scenarios, where agent populations are able to discover various physical and informational coordination strategies.
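The adaptation is centred on a critic that sees all agents' observations and actions during training, while each actor remains decentralized at execution time. The PyTorch fragment below sketches that centralized critic only; the architecture, dimensions and class name are illustrative assumptions rather than the paper's exact design.

import torch
import torch.nn as nn

class CentralizedCritic(nn.Module):
    # Q_i(o_1..o_N, a_1..a_N): conditioning on every agent's action removes
    # the non-stationarity an independent learner would see.
    def __init__(self, obs_dim, act_dim, n_agents, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_agents * (obs_dim + act_dim), hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, all_obs, all_acts):
        # all_obs: (batch, n_agents, obs_dim); all_acts: (batch, n_agents, act_dim)
        x = torch.cat([all_obs.flatten(1), all_acts.flatten(1)], dim=1)
        return self.net(x)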

1,273 citations



Journal ArticleDOI
TL;DR: Nivolumab has clinically meaningful activity and a manageable safety profile in previously treated patients with advanced, refractory, squamous non-small-cell lung cancer, and these data support the assessment of nivolumab in randomised, controlled, phase 3 studies of first-line and second-line treatment.
Abstract: Summary Background Patients with squamous non-small-cell lung cancer that is refractory to multiple treatments have poor outcomes. We assessed the activity of nivolumab, a fully human IgG4 PD-1 immune checkpoint inhibitor antibody, for patients with advanced, refractory, squamous non-small-cell lung cancer. Methods We did this phase 2, single-arm trial at 27 sites (academic, hospital, and private cancer centres) in France, Germany, Italy, and USA. Patients who had received two or more previous treatments received intravenous nivolumab (3 mg/kg) every 2 weeks until progression or unacceptable toxic effects. The primary endpoint was the proportion of patients with a confirmed objective response as assessed by an independent radiology review committee. We included all treated patients in the analyses. This study is registered with ClinicalTrials.gov, number NCT01721759. Findings Between Nov 16, 2012, and July 22, 2013, we enrolled and treated 117 patients. 17 (14·5%, 95% CI 8·7–22·2) of 117 patients had an objective response as assessed by an independent radiology review committee. Median time to response was 3·3 months (IQR 2·2–4·8), and median duration of response was not reached (95% CI 8·31–not applicable); 13 (77%) of 17 of responses were ongoing at the time of analysis. 30 (26%) of 117 patients had stable disease (median duration 6·0 months, 95% CI 4·7–10·9). 20 (17%) of 117 patients reported grade 3–4 treatment-related adverse events, including: fatigue (five [4%] of 117 patients), pneumonitis (four [3%]), and diarrhoea (three [3%]). There were two treatment-associated deaths caused by pneumonia and ischaemic stroke that occurred in patients with multiple comorbidities in the setting of progressive disease. Interpretation Nivolumab has clinically meaningful activity and a manageable safety profile in previously treated patients with advanced, refractory, squamous non-small cell lung cancer. These data support the assessment of nivolumab in randomised, controlled, phase 3 studies of first-line and second-line treatment. Funding Bristol-Myers Squibb.

1,273 citations


Proceedings Article
29 Apr 2018
TL;DR: It is shown that the graph convolution of the GCN model is actually a special form of Laplacian smoothing, which is the key reason why GCNs work, but it also brings potential concerns of over-smoothing with many convolutional layers.
Abstract: Many interesting problems in machine learning are being revisited with new deep learning tools. For graph-based semi-supervised learning, a recent important development is graph convolutional networks (GCNs), which nicely integrate local vertex features and graph topology in the convolutional layers. Although the GCN model compares favorably with other state-of-the-art methods, its mechanisms are not clear and it still requires considerable amount of labeled data for validation and model selection. In this paper, we develop deeper insights into the GCN model and address its fundamental limits. First, we show that the graph convolution of the GCN model is actually a special form of Laplacian smoothing, which is the key reason why GCNs work, but it also brings potential concerns of over-smoothing with many convolutional layers. Second, to overcome the limits of the GCN model with shallow architectures, we propose both co-training and self-training approaches to train GCNs. Our approaches significantly improve GCNs in learning with very few labels, and exempt them from requiring additional labels for validation. Extensive experiments on benchmarks have verified our theory and proposals.
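The smoothing claim can be stated exactly. With self-loops $\tilde{A} = A + I$ and degrees $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$, the GCN layer and its rewriting as Laplacian smoothing are (standard formulas, restated here for reference):

H^{(l+1)} = \sigma\left(\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2} H^{(l)} W^{(l)}\right),
\qquad
\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2} = I - \tilde{D}^{-1/2}\tilde{L}\tilde{D}^{-1/2},
\quad \tilde{L} = \tilde{D} - \tilde{A}.

Because the propagation matrix is the identity minus a normalized graph Laplacian, each layer averages node features over their neighbourhoods; repeated application drives the features of connected nodes together, which is the over-smoothing risk the authors identify.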

1,273 citations


Posted Content
TL;DR: A new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes, is introduced, to train a high-quality centralized model.
Abstract: We introduce a new and increasingly relevant setting for distributed optimization in machine learning, where the data defining the optimization are unevenly distributed over an extremely large number of nodes. The goal is to train a high-quality centralized model. We refer to this setting as Federated Optimization. In this setting, communication efficiency is of the utmost importance and minimizing the number of rounds of communication is the principal goal. A motivating example arises when we keep the training data locally on users' mobile devices instead of logging it to a data center for training. In federated optimization, the devices are used as compute nodes performing computation on their local data in order to update a global model. We suppose that we have an extremely large number of devices in the network, as many as the number of users of a given service, each of which has only a tiny fraction of the total data available. In particular, we expect the number of data points available locally to be much smaller than the number of devices. Additionally, since different users generate data with different patterns, it is reasonable to assume that no device has a representative sample of the overall distribution. We show that existing algorithms are not suitable for this setting, and propose a new algorithm which shows encouraging experimental results for sparse convex problems. This work also sets a path for future research needed in the context of federated optimization.
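One communication round in this setting can be sketched as follows. This is a hedged, FedAvg-style illustration of the setup (local updates on many clients, then server-side averaging), not the SVRG-based algorithm the paper actually proposes; all names and hyperparameters are assumptions.

import copy
import torch
import torch.nn.functional as F

def federated_round(global_model, client_loaders, local_steps=10, lr=0.1):
    # Each client refines a copy of the global model on its own (small,
    # non-representative) local dataset; the server then averages parameters.
    client_params = []
    for loader in client_loaders:
        model = copy.deepcopy(global_model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for step, (x, y) in enumerate(loader):
            if step >= local_steps:
                break
            opt.zero_grad()
            F.cross_entropy(model(x), y).backward()
            opt.step()
        client_params.append([p.detach() for p in model.parameters()])
    with torch.no_grad():
        for i, p in enumerate(global_model.parameters()):
            p.copy_(torch.stack([cp[i] for cp in client_params]).mean(0))
    return global_model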

1,272 citations


Posted Content
TL;DR: This paper designs a distributed computation offloading algorithm that can achieve a Nash equilibrium, derive the upper bound of the convergence time, and quantify its efficiency ratio over the centralized optimal solutions in terms of two important performance metrics.
Abstract: Mobile-edge cloud computing is a new paradigm to provide cloud computing capabilities at the edge of pervasive radio access networks in close proximity to mobile users. In this paper, we first study the multi-user computation offloading problem for mobile-edge cloud computing in a multi-channel wireless interference environment. We show that it is NP-hard to compute a centralized optimal solution, and hence adopt a game theoretic approach for achieving efficient computation offloading in a distributed manner. We formulate the distributed computation offloading decision making problem among mobile device users as a multi-user computation offloading game. We analyze the structural property of the game and show that the game admits a Nash equilibrium and possesses the finite improvement property. We then design a distributed computation offloading algorithm that can achieve a Nash equilibrium, derive the upper bound of the convergence time, and quantify its efficiency ratio over the centralized optimal solutions in terms of two important performance metrics. We further extend our study to the scenario of multi-user computation offloading in the multi-channel wireless contention environment. Numerical results corroborate that the proposed algorithm can achieve superior computation offloading performance and scale well as the user size increases.
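The backbone of the distributed algorithm, users taking turns to best-respond until no one can improve, is easy to illustrate. The toy cost model below is purely an assumption for illustration (the paper's model has detailed wireless interference, delay and energy terms); what it shares with the paper is the structure: sequential best responses in a game with the finite improvement property terminate at a pure Nash equilibrium.

def best_response_offloading(n_users, n_channels, max_rounds=100):
    LOCAL_COST = 5.0                 # cost of computing locally (illustrative)
    choice = [0] * n_users           # 0 = local, 1..n_channels = offload

    def cost(user, c, counts):
        if c == 0:
            return LOCAL_COST
        others = counts[c] - (1 if choice[user] == c else 0)
        return 1.0 + 2.0 * others    # offloading cost grows with congestion

    for _ in range(max_rounds):
        moved = False
        for u in range(n_users):
            counts = [0] * (n_channels + 1)
            for c in choice:
                counts[c] += 1
            best = min(range(n_channels + 1), key=lambda c: cost(u, c, counts))
            if cost(u, best, counts) < cost(u, choice[u], counts):
                choice[u] = best
                moved = True
        if not moved:                # no unilateral improvement: Nash equilibrium
            break
    return choice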

Posted Content
TL;DR: In this article, the authors propose a differentiable architecture search algorithm based on a continuous relaxation of the architecture representation, replacing search over a discrete and non-differentiable space with gradient descent.
Abstract: This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. Our implementation has been made publicly available to facilitate further research on efficient architecture search algorithms.
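The continuous relaxation has a compact form: each edge of the search cell computes a softmax-weighted mixture of candidate operations, so the architecture weights become ordinary differentiable parameters. The PyTorch sketch below uses a small illustrative candidate set and omits the paper's bilevel optimization of architecture weights against network weights.

import torch
import torch.nn as nn

class MixedOp(nn.Module):
    # Relaxed categorical choice on one edge: the output is the alpha-softmax
    # mixture of all candidate operations applied to the same input.
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.AvgPool2d(3, stride=1, padding=1),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

After search, a discrete architecture is recovered by keeping the highest-weighted operation on each edge.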

Proceedings Article
06 Jul 2015
TL;DR: Deep Adaptation Network (DAN) as mentioned in this paper embeds hidden representations of all task-specific layers in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched.
Abstract: Recent studies reveal that a deep neural network can learn transferable features which generalize well to novel tasks for domain adaptation. However, as deep features eventually transition from general to specific along the network, the feature transferability drops significantly in higher layers with increasing domain discrepancy. Hence, it is important to formally reduce the dataset bias and enhance the transferability in task-specific layers. In this paper, we propose a new Deep Adaptation Network (DAN) architecture, which generalizes deep convolutional neural network to the domain adaptation scenario. In DAN, hidden representations of all task-specific layers are embedded in a reproducing kernel Hilbert space where the mean embeddings of different domain distributions can be explicitly matched. The domain discrepancy is further reduced using an optimal multikernel selection method for mean embedding matching. DAN can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding. Extensive empirical evidence shows that the proposed architecture yields state-of-the-art image classification error rates on standard domain adaptation benchmarks.
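The matching criterion can be made concrete as a multi-kernel maximum mean discrepancy (MMD) penalty between source and target features at a task-specific layer. The sketch below uses a plain biased estimator with illustrative Gaussian bandwidths; DAN's optimal kernel weighting and its linear-time unbiased estimator are omitted.

import torch

def multi_kernel_mmd(x, y, gammas=(0.5, 1.0, 2.0)):
    # x: (n, d) source features, y: (m, d) target features.
    # Squared MMD under a sum of Gaussian kernels exp(-gamma * ||a - b||^2).
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return sum(torch.exp(-g * d2) for g in gammas)
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

Adding this term, suitably weighted, to the classification loss pulls the two domains' layer statistics together during training.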

Journal ArticleDOI
29 Apr 2016-Science
TL;DR: Deep sequencing of the gut microbiomes of 1135 participants from a Dutch population-based cohort shows relations between the microbiome and 126 exogenous and intrinsic host factors, including 31 intrinsic factors, 12 diseases, 19 drug groups, 4 smoking categories, and 60 dietary factors, an important step toward a better understanding of environment-diet-microbe-host interactions.
Abstract: Deep sequencing of the gut microbiomes of 1135 participants from a Dutch population-based cohort shows relations between the microbiome and 126 exogenous and intrinsic host factors, including 31 intrinsic factors, 12 diseases, 19 drug groups, 4 smoking categories, and 60 dietary factors. These factors collectively explain 18.7% of the variation seen in the interindividual distance of microbial composition. We could associate 110 factors to 125 species and observed that fecal chromogranin A (CgA), a protein secreted by enteroendocrine cells, was exclusively associated with 61 microbial species whose abundance collectively accounted for 53% of microbial composition. Low CgA concentrations were seen in individuals with a more diverse microbiome. These results are an important step toward a better understanding of environment-diet-microbe-host interactions.

Proceedings Article
04 Nov 2016
TL;DR: MS MARCO as mentioned in this paper is a large scale dataset for reading comprehension and question answering in which all questions are sampled from real anonymized user queries and the context passages, from which answers are derived, are extracted from real web documents using the most advanced version of the Bing search engine.
Abstract: This paper presents our recent work on the design and development of a new, large scale dataset, which we name MS MARCO, for MAchine Reading COmprehension. This new dataset is aimed to overcome a number of well-known weaknesses of previous publicly available datasets for the same task of reading comprehension and question answering. In MS MARCO, all questions are sampled from real anonymized user queries. The context passages, from which answers in the dataset are derived, are extracted from real web documents using the most advanced version of the Bing search engine. The answers to the queries are human generated. Finally, a subset of these queries has multiple answers. We aim to release one million queries and the corresponding answers in the dataset, which, to the best of our knowledge, is the most comprehensive real-world dataset of its kind in both quantity and quality. We are currently releasing 100,000 queries with their corresponding answers to inspire work in reading comprehension and question answering along with gathering feedback from the research community.

Proceedings ArticleDOI
03 Nov 2017
TL;DR: An effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN is proposed, sparing the need for training substitute models and avoiding the loss in attack transferability.
Abstract: Deep neural networks (DNNs) are one of the most prominent technologies of our time, as they achieve state-of-the-art performance in many machine learning tasks, including but not limited to image classification, text mining, and speech processing. However, recent research on DNNs has indicated ever-increasing concern on the robustness to adversarial examples, especially for security-critical tasks such as traffic sign identification for autonomous driving. Studies have unveiled the vulnerability of a well-trained DNN by demonstrating the ability of generating barely noticeable (to both human and machines) adversarial images that lead to misclassification. Furthermore, researchers have shown that these adversarial images are highly transferable by simply training and attacking a substitute model built upon the target model, known as a black-box attack to DNNs. Similar to the setting of training substitute models, in this paper we propose an effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN. However, different from leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples. We use zeroth order stochastic coordinate descent along with dimension reduction, hierarchical attack and importance sampling techniques to efficiently attack black-box models. By exploiting zeroth order optimization, improved attacks to the targeted DNN can be accomplished, sparing the need for training substitute models and avoiding the loss in attack transferability. Experimental results on MNIST, CIFAR10 and ImageNet show that the proposed ZOO attack is as effective as the state-of-the-art white-box attack (e.g., Carlini and Wagner's attack) and significantly outperforms existing black-box attacks via substitute models.
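The core primitive is a gradient estimate computed purely from black-box queries. A minimal NumPy sketch of the symmetric-difference coordinate estimator follows; the function name and step size are illustrative, and the paper's coordinate selection, dimension reduction and full coordinate-descent loop are omitted.

import numpy as np

def zoo_coordinate_grad(f, x, idx, h=1e-4):
    # Estimate d f / d x_idx as (f(x + h e_idx) - f(x - h e_idx)) / (2h),
    # querying only the model's scalar loss (e.g. built from confidence
    # scores), never its internals or gradients.
    e = np.zeros_like(x)
    e.flat[idx] = h
    return (f(x + e) - f(x - e)) / (2 * h)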

Journal ArticleDOI
21 Apr 2020-BMJ
TL;DR: The duration of SARS-CoV-2 is significantly longer in stool samples than in respiratory and serum samples, highlighting the need to strengthen the management of stool samples in the prevention and control of the epidemic, and the virus persists longer with higher load and peaks later in the respiratory tissue of patients with severe disease.
Abstract: Objective To evaluate viral loads at different stages of disease progression in patients infected with the 2019 severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) during the first four months of the epidemic in Zhejiang province, China. Design Retrospective cohort study. Setting A designated hospital for patients with covid-19 in Zhejiang province, China. Participants 96 consecutively admitted patients with laboratory confirmed SARS-CoV-2 infection: 22 with mild disease and 74 with severe disease. Data were collected from 19 January 2020 to 20 March 2020. Main outcome measures Ribonucleic acid (RNA) viral load measured in respiratory, stool, serum, and urine samples. Cycle threshold values, a measure of nucleic acid concentration, were plotted onto the standard curve constructed on the basis of the standard product. Epidemiological, clinical, and laboratory characteristics and treatment and outcomes data were obtained through data collection forms from electronic medical records, and the relation between clinical data and disease severity was analysed. Results 3497 respiratory, stool, serum, and urine samples were collected from patients after admission and evaluated for SARS-CoV-2 RNA viral load. Infection was confirmed in all patients by testing sputum and saliva samples. RNA was detected in the stool of 55 (59%) patients and in the serum of 39 (41%) patients. The urine sample from one patient was positive for SARS-CoV-2. The median duration of virus in stool (22 days, interquartile range 17-31 days) was significantly longer than in respiratory (18 days, 13-29 days; P=0.02) and serum samples (16 days, 11-21 days; P…). Conclusion The duration of SARS-CoV-2 is significantly longer in stool samples than in respiratory and serum samples, highlighting the need to strengthen the management of stool samples in the prevention and control of the epidemic, and the virus persists longer with higher load and peaks later in the respiratory tissue of patients with severe disease.

Journal ArticleDOI
TL;DR: This review analyses recent advances in genetic algorithms, presenting the well-known algorithms and their implementations with their pros and cons with the aim of facilitating new researchers.
Abstract: In this paper, recent advances in genetic algorithms are analysed. Genetic algorithms of great interest in the research community are selected for analysis. This review gives new and demanding researchers a wider vision of genetic algorithms. The well-known algorithms and their implementations are presented with their pros and cons. The genetic operators and their usage are discussed with the aim of facilitating new researchers. The different research domains involved in genetic algorithms are covered. Future research directions in the areas of genetic operators, fitness functions and hybrid algorithms are discussed. This structured review will be helpful for research and graduate teaching.
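As a fixed point of reference for the operators the review surveys, here is a minimal bit-string GA in Python; every parameter, the choice of tournament selection, one-point crossover and bit-flip mutation, and the one-max example are illustrative defaults rather than recommendations from the paper.

import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=100,
                      p_mut=0.02, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    def tournament():                      # fitness-based selection
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, length)                   # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = genetic_algorithm(sum)   # example: maximize the number of ones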

Journal ArticleDOI
01 Jan 2017
TL;DR: In this article, the authors acknowledge the programmes and facilities that supported the observations, including AGILE, an ASI space mission developed with programmatic support by INAF and INFN, and data gathered with the 1 meter Swope and 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile.
Abstract: This program was supported by the Kavli Foundation, Danish National Research Foundation, the Niels Bohr International Academy, and the DARK Cosmology Centre. The UCSC group is supported in part by NSF grant AST-1518052, the Gordon & Betty Moore Foundation, the Heising-Simons Foundation, generous donations from many individuals through a UCSC Giving Day grant, and from fellowships from the Alfred P. Sloan Foundation (R.J.F.), the David and Lucile Packard Foundation (R.J.F. and E.R.) and the Niels Bohr Professorship from the DNRF (E.R.). AMB acknowledges support from a UCMEXUS-CONACYT Doctoral Fellowship. Support for this work was provided by NASA through Hubble Fellowship grants HST-HF-51348.001 (B.J.S.) and HST-HF-51373.001 (M.R.D.) awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. This paper includes data gathered with the 1 meter Swope and 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile. (AGILE) The AGILE Team thanks the ASI management, the technical staff at the ASI Malindi ground station, the technical support team at the ASI Space Science Data Center, and the Fucino AGILE Mission Operation Center. AGILE is an ASI space mission developed with programmatic support by INAF and INFN. We acknowledge partial support through the ASI grant No. I/028/12/2. We also thank INAF, Italian Institute of Astrophysics, and ASI, Italian Space Agency. (ANTARES) The ANTARES Collaboration acknowledges the financial support of: Centre National de la Recherche Scientifique (CNRS), Commissariat à l'énergie atomique et aux énergies alternatives (CEA), Commission Européenne (FEDER fund and Marie Curie Program), Institut Universitaire de France (IUF), IdEx program and UnivEarthS Labex program at Sorbonne Paris Cité (ANR-10-LABX-0023 and ANR-11-IDEX-0005-02), Labex OCEVU (ANR-11-LABX-0060) and the A*MIDEX project (ANR-11-IDEX-0001-02), Région Île-de-France (DIM-ACAV), Région Alsace (contrat CPER), Région Provence-Alpes-Côte d'Azur, Département du Var and Ville de La Seyne-sur-Mer, France; Bundesministerium für Bildung und Forschung (BMBF), Germany; Istituto Nazionale di Fisica Nucleare (INFN), Italy; Nederlandse organisatie voor Wetenschappelijk Onderzoek (NWO), the Netherlands; Council of the President of the Russian Federation for young scientists and leading scientific schools supporting grants, Russia; National Authority for Scientific Research (ANCS), Romania;...

Posted Content
TL;DR: This work proposes the convolution-augmented transformer for speech recognition, named Conformer, which significantly outperforms the previous Transformer and CNN based models achieving state-of-the-art accuracies.
Abstract: Recently Transformer and Convolution neural network (CNN) based models have shown promising results in Automatic Speech Recognition (ASR), outperforming Recurrent neural networks (RNNs). Transformer models are good at capturing content-based global interactions, while CNNs exploit local features effectively. In this work, we achieve the best of both worlds by studying how to combine convolution neural networks and transformers to model both local and global dependencies of an audio sequence in a parameter-efficient way. To this regard, we propose the convolution-augmented transformer for speech recognition, named Conformer. Conformer significantly outperforms the previous Transformer and CNN based models achieving state-of-the-art accuracies. On the widely used LibriSpeech benchmark, our model achieves WER of 2.1%/4.3% without using a language model and 1.9%/3.9% with an external language model on test/testother. We also observe competitive performance of 2.7%/6.3% with a small model of only 10M parameters.
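The macro-structure, two half-step feed-forward modules sandwiching self-attention and a convolution module, can be sketched in PyTorch as follows; the dimensions, kernel size and simplified convolution module (and the absence of relative positional encoding) are assumptions for illustration, not the paper's exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConformerBlock(nn.Module):
    def __init__(self, d=256, heads=4, kernel=15):
        super().__init__()
        self.ff1 = nn.Sequential(nn.LayerNorm(d), nn.Linear(d, 4 * d),
                                 nn.SiLU(), nn.Linear(4 * d, d))
        self.att_norm = nn.LayerNorm(d)
        self.att = nn.MultiheadAttention(d, heads, batch_first=True)
        self.conv_norm = nn.LayerNorm(d)
        self.pw1 = nn.Conv1d(d, 2 * d, 1)        # pointwise conv feeding a GLU
        self.dw = nn.Conv1d(d, d, kernel, padding=kernel // 2, groups=d)
        self.bn = nn.BatchNorm1d(d)
        self.pw2 = nn.Conv1d(d, d, 1)
        self.ff2 = nn.Sequential(nn.LayerNorm(d), nn.Linear(d, 4 * d),
                                 nn.SiLU(), nn.Linear(4 * d, d))
        self.out_norm = nn.LayerNorm(d)

    def forward(self, x):                         # x: (batch, time, d)
        x = x + 0.5 * self.ff1(x)                 # half-step feed-forward
        a = self.att_norm(x)
        x = x + self.att(a, a, a, need_weights=False)[0]   # global context
        c = self.conv_norm(x).transpose(1, 2)     # (batch, d, time)
        c = F.glu(self.pw1(c), dim=1)
        c = F.silu(self.bn(self.dw(c)))           # depthwise conv: local features
        x = x + self.pw2(c).transpose(1, 2)
        x = x + 0.5 * self.ff2(x)                 # second half-step feed-forward
        return self.out_norm(x)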

Proceedings Article
21 Feb 2015
TL;DR: Deeply-supervised nets (DSN) as discussed by the authors is a method that simultaneously minimizes classification error and improves the directness and transparency of the hidden layer learning process by introducing companion objective functions at each hidden layer, in addition to the overall objective function at the output layer.
Abstract: We propose deeply-supervised nets (DSN), a method that simultaneously minimizes classification error and improves the directness and transparency of the hidden layer learning process. We focus our attention on three aspects of traditional convolutional-neural-network-type (CNN-type) architectures: (1) transparency in the effect intermediate layers have on overall classification; (2) discriminativeness and robustness of learned features, especially in early layers; (3) training effectiveness in the face of "vanishing" gradients. To combat these issues, we introduce "companion" objective functions at each hidden layer, in addition to the overall objective function at the output layer (an integrated strategy distinct from layer-wise pretraining). We also analyze our algorithm using techniques extended from stochastic gradient methods. The advantages provided by our method are evident in our experimental results, showing state-of-the-art performance on MNIST, CIFAR-10, CIFAR-100, and SVHN.
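The companion objective reduces to a small change in the training loss: auxiliary classifiers attached to hidden layers contribute their own supervised terms alongside the output-layer loss. A hedged sketch follows; the fixed weight and the use of cross-entropy (the paper works with a squared hinge loss) are simplifications.

import torch.nn.functional as F

def dsn_loss(output_logits, companion_logits, targets, weight=0.3):
    # companion_logits: list of logits from auxiliary classifiers, one per
    # supervised hidden layer; each gets its own classification loss.
    loss = F.cross_entropy(output_logits, targets)
    for logits in companion_logits:
        loss = loss + weight * F.cross_entropy(logits, targets)
    return loss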

Journal ArticleDOI
TL;DR: An approach that combines repurposed pharmaceutical agents with other therapeutics has shown promising results in mitigating tumour burden, and this systematic review discusses important pathways commonly targeted in cancer therapy.
Abstract: Combination therapy, a treatment modality that combines two or more therapeutic agents, is a cornerstone of cancer therapy. The amalgamation of anti-cancer drugs enhances efficacy compared to the mono-therapy approach because it targets key pathways in a characteristically synergistic or an additive manner. This approach potentially reduces drug resistance, while simultaneously providing therapeutic anti-cancer benefits, such as reducing tumour growth and metastatic potential, arresting mitotically active cells, reducing cancer stem cell populations, and inducing apoptosis. The 5-year survival rates for most metastatic cancers are still quite low, and the process of developing a new anti-cancer drug is costly and extremely time-consuming. Therefore, new strategies that target the survival pathways that provide efficient and effective results at an affordable cost are being considered. One such approach incorporates repurposing therapeutic agents initially used for the treatment of different diseases other than cancer. This approach is effective primarily when the FDA-approved agent targets similar pathways found in cancer. Because one of the drugs used in combination therapy is already FDA-approved, overall costs of combination therapy research are reduced. This increases cost efficiency of therapy, thereby benefiting the “medically underserved”. In addition, an approach that combines repurposed pharmaceutical agents with other therapeutics has shown promising results in mitigating tumour burden. In this systematic review, we discuss important pathways commonly targeted in cancer therapy. Furthermore, we also review important repurposed or primary anti-cancer agents that have gained popularity in clinical trials and research since 2012.

Proceedings ArticleDOI
01 Jun 2018
TL;DR: Deep Back-Projection Networks (DBPN) as discussed by the authors exploit iterative up- and down-sampling layers, providing an error feedback mechanism for projection errors at each stage, and construct mutually-connected up- and down-sampling stages each of which represents different types of image degradation and high-resolution components.
Abstract: The feed-forward architectures of recently proposed deep super-resolution networks learn representations of low-resolution inputs, and the non-linear mapping from those to high-resolution output. However, this approach does not fully address the mutual dependencies of low- and high-resolution images. We propose Deep Back-Projection Networks (DBPN), that exploit iterative up- and down-sampling layers, providing an error feedback mechanism for projection errors at each stage. We construct mutually-connected up- and down-sampling stages each of which represents different types of image degradation and high-resolution components. We show that extending this idea to allow concatenation of features across up- and down-sampling stages (Dense DBPN) allows us to further improve super-resolution, yielding superior results and in particular establishing new state-of-the-art results for large scaling factors such as 8× across multiple data sets.
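The error-feedback mechanism is the distinctive piece: upsample, project the result back down, and use the low-resolution residual to correct the high-resolution estimate. A PyTorch sketch of one up-projection unit follows; the channel count and the 8×8/stride-4 geometry (4× scaling) are illustrative choices, and the dense connections of Dense DBPN are omitted.

import torch.nn as nn

class UpProjection(nn.Module):
    def __init__(self, c=64):
        super().__init__()
        self.up1 = nn.ConvTranspose2d(c, c, 8, stride=4, padding=2)
        self.down = nn.Conv2d(c, c, 8, stride=4, padding=2)
        self.up2 = nn.ConvTranspose2d(c, c, 8, stride=4, padding=2)

    def forward(self, low):
        high = self.up1(low)             # tentative high-resolution map
        err = self.down(high) - low      # projection error at low resolution
        return high + self.up2(err)      # feed the error back as a correction

The mirrored down-projection unit swaps the roles of the two resolutions, and DBPN alternates the two.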

Journal ArticleDOI
B. P. Abbott, Richard J. Abbott, T. D. Abbott, Fausto Acernese, and 1,151 more authors from 125 institutions
TL;DR: In this article, a GW signal from the merger of two stellar-mass black holes was observed by the two Advanced Laser Interferometer Gravitational-Wave Observatory detectors with a network signal-to-noise ratio of 13.
Abstract: On 2017 June 8 at 02:01:16.49 UTC, a gravitational-wave (GW) signal from the merger of two stellar-mass black holes was observed by the two Advanced Laser Interferometer Gravitational-Wave Observatory detectors with a network signal-to-noise ratio of 13. This system is the lightest black hole binary so far observed, with component masses of ${12}_{-2}^{+7}\,{M}_{\odot }$ and ${7}_{-2}^{+2}\,{M}_{\odot }$ (90% credible intervals). These lie in the range of measured black hole masses in low-mass X-ray binaries, thus allowing us to compare black holes detected through GWs with electromagnetic observations. The source's luminosity distance is ${340}_{-140}^{+140}\,\mathrm{Mpc}$, corresponding to redshift ${0.07}_{-0.03}^{+0.03}$. We verify that the signal waveform is consistent with the predictions of general relativity.

Journal ArticleDOI
TL;DR: The 12th generation of the International Geomagnetic Reference Field (IGRF) was adopted in December 2014 by the Working Group V-MOD appointed by the International Association of Geomagnetism and Aeronomy (IAGA) as discussed by the authors.
Abstract: The 12th generation of the International Geomagnetic Reference Field (IGRF) was adopted in December 2014 by the Working Group V-MOD appointed by the International Association of Geomagnetism and Aeronomy (IAGA). It updates the previous IGRF generation with a definitive main field model for epoch 2010.0, a main field model for epoch 2015.0, and a linear annual predictive secular variation model for 2015.0-2020.0. Here, we present the equations defining the IGRF model, provide the spherical harmonic coefficients, and provide maps of the magnetic declination, inclination, and total intensity for epoch 2015.0 and their predicted rates of change for 2015.0-2020.0. We also update the magnetic pole positions and discuss briefly the latest changes and possible future trends of the Earth’s magnetic field.
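For reference, the equations the abstract refers to take the standard spherical-harmonic form (restated here from the IGRF definition, with B = -\nabla V):

V(r,\theta,\phi,t) = a \sum_{n=1}^{N} \sum_{m=0}^{n} \left(\frac{a}{r}\right)^{n+1} \left[ g_n^m(t)\cos(m\phi) + h_n^m(t)\sin(m\phi) \right] P_n^m(\cos\theta)

where a = 6371.2 km is the magnetic reference radius, (r, θ, φ) are geocentric spherical coordinates, P_n^m are the Schmidt semi-normalized associated Legendre functions, and the Gauss coefficients g_n^m(t), h_n^m(t) are the tabulated model values (truncated at degree N = 13 for recent main field generations).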

Journal ArticleDOI
TL;DR: The preponderant burden of HPV16/18 and the possibility of cross‐protection emphasize the importance of the introduction of more affordable vaccines in less developed countries.
Abstract: HPV is the cause of almost all cervical cancer and is responsible for a substantial fraction of other anogenital cancers and oropharyngeal cancers. Understanding the HPV-attributable cancer burden can boost programs of HPV vaccination and HPV-based cervical screening. Attributable fractions (AFs) and the relative contributions of different HPV types were derived from published studies reporting on the prevalence of transforming HPV infection in cancer tissue. Maps of age-standardized incidence rates of HPV-attributable cancers by country from GLOBOCAN 2012 data are shown separately for the cervix, other anogenital tract and head and neck cancers. The relative contribution of HPV16/18 and HPV6/11/16/18/31/33/45/52/58 was also estimated. 4.5% of all cancers worldwide (630,000 new cancer cases per year) are attributable to HPV: 8.6% in women and 0.8% in men. The AF in women rises to 20% or more in India and sub-Saharan Africa. Cervix accounts for 83% of HPV-attributable cancer, two-thirds of which occur in less developed countries. Other HPV-attributable anogenital cancer includes 8,500 vulva; 12,000 vagina; 35,000 anus (half occurring in men) and 13,000 penis. In the head and neck, HPV-attributable cancers represent 38,000 cases of which 21,000 are oropharyngeal cancers occurring in more developed countries. The relative contributions of HPV16/18 and HPV6/11/16/18/31/33/45/52/58 are 73% and 90%, respectively. Universal access to vaccination is the key to avoiding most cases of HPV-attributable cancer. The preponderant burden of HPV16/18 and the possibility of cross-protection emphasize the importance of the introduction of more affordable vaccines in less developed countries.

Journal ArticleDOI
26 Jan 2016-ACS Nano
TL;DR: It is found that ligand binding to the NC surface is highly dynamic, and therefore, ligands are easily lost during the isolation and purification procedures, and when a small amount of both oleic acid and oleylamine is added, the NCs can be purified, maintaining optical, colloidal, and material integrity.
Abstract: Lead halide perovskite materials have attracted significant attention in the context of photovoltaics and other optoelectronic applications, and recently, research efforts have been directed to nanostructured lead halide perovskites. Colloidal nanocrystals (NCs) of cesium lead halides (CsPbX3, X = Cl, Br, I) exhibit bright photoluminescence, with emission tunable over the entire visible spectral region. However, previous studies on CsPbX3 NCs did not address key aspects of their chemistry and photophysics such as surface chemistry and quantitative light absorption. Here, we elaborate on the synthesis of CsPbBr3 NCs and their surface chemistry. In addition, the intrinsic absorption coefficient was determined experimentally by combining elemental analysis with accurate optical absorption measurements. 1H solution nuclear magnetic resonance spectroscopy was used to characterize sample purity, elucidate the surface chemistry, and evaluate the influence of purification methods on the surface composition. We fi...

Journal ArticleDOI
TL;DR: In this article, the authors present a state-of-the-art review offering a holistic view of big data (BD) challenges and big data analytics (BDA) methods theorized, proposed, or employed by organizations, to help others understand this landscape with the objective of making robust investment decisions.

Journal Article
TL;DR: The GI Bill brought eight million veterans back to campus, which sparked in this country a revolution of rising expectations, and there's no turning back as discussed by the authors. But many academics questioned the wisdom of inviting GIs to campus; after all, these men hadn't passed the SATs; they'd simply gone off to war, and what did they know except survival.
Abstract: The goals were rooted in practical reality and aimed toward useful ends. In the 1940s the GI Bill brought eight million veterans back to campus, which sparked in this country a revolution of rising expectations. May I whisper that professors were not at the forefront urging the GI Bill; this initiative came from Congress. Many academics, in fact, questioned the wisdom of inviting GIs to campus; after all, these men hadn't passed the SATs; they'd simply gone off to war, and what did they know except survival? The story gets even grimmer. I read some years ago that the dean of admissions at one of the well-known institutions in the country opposed the GIs because, he argued, many of them would be married; they would bring baby carriages to campus, and even contaminate the young undergraduates with bad ideas at that pristine institution. I think he knew little about GIs and even less about the undergraduates at his own college. But putting that resistance aside, the point is largely made that the universities joined in an absolutely spectacular experiment, in a cultural commitment to rising expectations, and what was for the GIs a privilege became for their children and grandchildren an absolute right. And there's no turning back. Almost coincidentally, Secretary of State

01 Mar 2015
TL;DR: The U.S. population is projected to grow more slowly in future decades than in the recent past, as these projections assume that fertility rates will continue to decline and that there will be a modest decline in the overall rate of net international migration as discussed by the authors.
Abstract: Between 2014 and 2060, the U.S. population is projected to increase from 319 million to 417 million, reaching 400 million in 2051. The U.S. population is projected to grow more slowly in future decades than in the recent past, as these projections assume that fertility rates will continue to decline and that there will be a modest decline in the overall rate of net international migration. By 2030, one in five Americans is projected to be 65 and over; by 2044, more than half of all Americans are projected to belong to a minority group (any group other than non-Hispanic White alone); and by 2060, nearly one in five of the nation’s total population is projected to be foreign born.