
Journal ArticleDOI
TL;DR: This review surveys the current landscape of available tools, focuses on the principles of error correction, base modification detection, and long-read transcriptomics analysis, and highlights the challenges that remain.
Abstract: Long-read technologies are overcoming early limitations in accuracy and throughput, broadening their application domains in genomics. Dedicated analysis tools that take into account the characteristics of long-read data are thus required, but the fast pace of development of such tools can be overwhelming. To assist in the design and analysis of long-read sequencing projects, we review the current landscape of available tools and present an online interactive database, long-read-tools.org, to facilitate their browsing. We further focus on the principles of error correction, base modification detection, and long-read transcriptomics analysis and highlight the challenges that remain.
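As a heavily simplified illustration of one error-correction principle the review discusses, the sketch below (a toy example, not any specific tool from long-read-tools.org) corrects each position of a noisy read by majority vote over other reads aligned to the same locus.

```python
# Toy illustration of consensus-based error correction: given several
# error-prone reads already aligned to the same region, call the most
# frequent base at each column. Real long-read correctors (hybrid or
# self-correction) are far more sophisticated; this only conveys the principle.
from collections import Counter

def consensus(aligned_reads):
    """Majority-vote consensus over equal-length aligned reads."""
    length = len(aligned_reads[0])
    assert all(len(r) == length for r in aligned_reads)
    return "".join(
        Counter(read[i] for read in aligned_reads).most_common(1)[0][0]
        for i in range(length)
    )

reads = ["ACGTTAGC",   # each read carries isolated errors
         "ACGATAGC",
         "ACCTTAGC",
         "ACGTTAGC"]
print(consensus(reads))  # ACGTTAGC
```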

1,172 citations


Journal ArticleDOI
TL;DR: Analyzing web-browsing histories, this paper finds that social networks and search engines are associated with an increase in the mean ideological distance between individuals, that the vast majority of online news consumption consists of individuals simply visiting the home pages of their favorite, typically mainstream, news outlets, and that the magnitude of these effects is relatively modest.
Abstract: Online publishing, social networks, and web search have dramatically lowered the costs of producing, distributing, and discovering news articles. Some scholars argue that such technological changes increase exposure to diverse perspectives, while others worry that they increase ideological segregation. We address the issue by examining web-browsing histories for 50,000 US-located users who regularly read online news. We find that social networks and search engines are associated with an increase in the mean ideological distance between individuals. However, somewhat counterintuitively, these same channels also are associated with an increase in an individual’s exposure to material from his or her less preferred side of the political spectrum. Finally, the vast majority of online news consumption is accounted for by individuals simply visiting the home pages of their favorite, typically mainstream, news outlets, tempering the consequences—both positive and negative—of recent technological changes. We thus uncover evidence for both sides of the debate, while also finding that the magnitude of the effects is relatively modest.

1,171 citations


Journal ArticleDOI
20 Feb 2015-Science
TL;DR: A network-based framework is presented for identifying the location of disease modules within the interactome and using the overlap between modules to predict disease-disease relationships; disease pairs with overlapping disease modules display significant molecular similarity, elevated coexpression of their associated genes, similar symptoms, and high comorbidity.
Abstract: According to the disease module hypothesis, the cellular components associated with a disease segregate in the same neighborhood of the human interactome, the map of biologically relevant molecular interactions. Yet, given the incompleteness of the interactome and the limited knowledge of disease-associated genes, it is not obvious if the available data have sufficient coverage to map out modules associated with each disease. Here we derive mathematical conditions for the identifiability of disease modules and show that the network-based location of each disease module determines its pathobiological relationship to other diseases. For example, diseases with overlapping network modules show significant coexpression patterns, symptom similarity, and comorbidity, whereas diseases residing in separated network neighborhoods are phenotypically distinct. These tools represent an interactome-based platform to predict molecular commonalities between phenotypically related diseases, even if they do not share primary disease genes.
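The network-based notion of module separation alluded to here can be computed directly from an interactome graph. Below is a minimal sketch (networkx on a made-up toy graph; the gene sets are not real disease genes) of the separation measure s_AB = <d_AB> - (<d_AA> + <d_BB>)/2, where each distance is the shortest path to the nearest gene of the relevant set; negative values indicate overlapping disease modules, positive values separated ones.

```python
# Minimal sketch of network-based separation s_AB between two gene sets on an
# interactome: s_AB = <d_AB> - (<d_AA> + <d_BB>)/2, using shortest-path
# distances to the nearest gene of the target set. Toy graph and gene sets
# are illustrative placeholders only.
import networkx as nx

def mean_shortest_to_set(G, sources, targets, exclude_self=False):
    dists = []
    for s in sources:
        tgts = [t for t in targets if not (exclude_self and t == s)]
        dists.append(min(nx.shortest_path_length(G, s, t) for t in tgts))
    return sum(dists) / len(dists)

def separation(G, A, B):
    d_AA = mean_shortest_to_set(G, A, A, exclude_self=True)
    d_BB = mean_shortest_to_set(G, B, B, exclude_self=True)
    d_AB = mean_shortest_to_set(G, A, B) * len(A) + mean_shortest_to_set(G, B, A) * len(B)
    d_AB /= (len(A) + len(B))
    return d_AB - (d_AA + d_BB) / 2

G = nx.Graph([("g1", "g2"), ("g2", "g3"), ("g3", "g4"), ("g4", "g5"), ("g5", "g6")])
print(separation(G, A={"g1", "g2"}, B={"g5", "g6"}))  # positive: separated modules
```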

1,171 citations


Posted Content
TL;DR: In this paper, the authors introduce intelligent synapses that bring some of the complexity of biological synapses into artificial neural networks, evaluate their approach on continual learning of classification tasks, and show that it dramatically reduces forgetting while maintaining computational efficiency.
Abstract: While deep learning has led to remarkable advances across diverse applications, it struggles in domains where the data distribution changes over the course of learning. In stark contrast, biological neural networks continually adapt to changing domains, possibly by leveraging complex molecular machinery to solve many tasks simultaneously. In this study, we introduce intelligent synapses that bring some of this biological complexity into artificial neural networks. Each synapse accumulates task relevant information over time, and exploits this information to rapidly store new memories without forgetting old ones. We evaluate our approach on continual learning of classification tasks, and show that it dramatically reduces forgetting while maintaining computational efficiency.
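A minimal numpy sketch of the idea, assuming a "synaptic intelligence"-style scheme: while training on a task, each parameter accumulates a path-integral contribution (negative gradient times parameter change), which is then converted into a per-parameter importance used as a quadratic penalty anchoring parameters near their post-task values. The toy quadratic losses stand in for real tasks and are not from the paper.

```python
# Sketch of per-synapse importance for continual learning: during a task each
# parameter accumulates w_k += -grad_k * delta_theta_k; afterwards importance
# Omega_k = w_k / ((total change)^2 + xi) weights a quadratic penalty that
# discourages later tasks from moving important parameters. Toy losses only.
import numpy as np

def train_task(theta, grad_fn, theta_anchor, omega, lr=0.1, c=1.0, steps=200):
    w = np.zeros_like(theta)                 # running path-integral contribution
    theta_start = theta.copy()
    for _ in range(steps):
        g_task = grad_fn(theta)                               # task-loss gradient
        g_penalty = 2 * c * omega * (theta - theta_anchor)    # consolidation gradient
        step = -lr * (g_task + g_penalty)
        w += -g_task * step                                   # importance accumulates
        theta = theta + step
    delta = theta - theta_start
    omega_new = omega + w / (delta**2 + 1e-3)                 # consolidate importance
    return theta, omega_new

theta, omega = np.zeros(2), np.zeros(2)
grad_task1 = lambda th: 2 * (th - np.array([1.0, 0.0]))    # task 1 optimum at (1, 0)
grad_task2 = lambda th: 2 * (th - np.array([-1.0, 3.0]))   # task 2 conflicts on dim 0
theta, omega = train_task(theta, grad_task1, theta.copy(), omega)
theta, omega = train_task(theta, grad_task2, theta.copy(), omega)
print(theta, omega)  # dim 0 compromises near 0 (important for task 1); dim 1 moves to ~3
```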

1,171 citations


Journal ArticleDOI
TL;DR: Two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences are presented: the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a Metagenome-Assembled Genome (MIMAG), both of which include estimates of genome completeness and contamination.
Abstract: We present two standards developed by the Genomic Standards Consortium (GSC) for reporting bacterial and archaeal genome sequences. Both are extensions of the Minimum Information about Any (x) Sequence (MIxS). The standards are the Minimum Information about a Single Amplified Genome (MISAG) and the Minimum Information about a Metagenome-Assembled Genome (MIMAG), including, but not limited to, assembly quality, and estimates of genome completeness and contamination. These standards can be used in combination with other GSC checklists, including the Minimum Information about a Genome Sequence (MIGS), Minimum Information about a Metagenomic Sequence (MIMS), and Minimum Information about a Marker Gene Sequence (MIMARKS). Community-wide adoption of MISAG and MIMAG will facilitate more robust comparative genomic analyses of bacterial and archaeal diversity.

1,171 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: In this article, a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise is presented, and two procedures for loss correction that are agnostic to both application domain and network architecture are proposed.
Abstract: We present a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise. We propose two procedures for loss correction that are agnostic to both application domain and network architecture. They simply amount to at most a matrix inversion and multiplication, provided that we know the probability of each class being corrupted into another. We further show how one can estimate these probabilities, adapting a recent technique for noise estimation to the multi-class setting, and thus providing an end-to-end framework. Extensive experiments on MNIST, IMDB, CIFAR-10, CIFAR-100 and a large scale dataset of clothing images employing a diversity of architectures — stacking dense, convolutional, pooling, dropout, batch normalization, word embedding, LSTM and residual layers — demonstrate the noise robustness of our proposals. Incidentally, we also prove that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise.
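A small numpy sketch of the two corrections, under the assumption that the class-corruption matrix T is known (T[i, j] = probability that a clean label i is observed as noisy label j): the "backward" correction combines the per-class losses with coefficients from the inverse of T, while the "forward" correction passes the model's softmax output through T before taking the log-loss. The matrix and probabilities below are illustrative values, not from the paper's experiments.

```python
# Sketch of loss correction for class-dependent label noise, assuming the
# noise-transition matrix T is known. Values are illustrative.
import numpy as np

T = np.array([[0.8, 0.2, 0.0],      # 20% of class-0 labels flip to class 1, etc.
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])

def forward_corrected_loss(probs, noisy_label):
    """Cross-entropy against the noisy label after pushing predictions through T."""
    noisy_probs = T.T @ probs               # predicted distribution over *noisy* labels
    return -np.log(noisy_probs[noisy_label])

def backward_corrected_loss(probs, noisy_label):
    """Linear combination of per-class losses with coefficients from the inverse of T."""
    per_class_loss = -np.log(probs)         # standard loss for each possible clean label
    return (np.linalg.inv(T) @ per_class_loss)[noisy_label]

probs = np.array([0.7, 0.2, 0.1])           # model's softmax output for one example
print(forward_corrected_loss(probs, noisy_label=1))
print(backward_corrected_loss(probs, noisy_label=1))
```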

1,171 citations


Journal ArticleDOI
15 Apr 2017-Geoderma
TL;DR: In this paper, the authors surveyed the soil organic carbon (SOC) stock estimates and sequestration potentials from 20 regions in the world (New Zealand, Chile, South Africa, Australia, Tanzania, Indonesia, Kenya, Nigeria, India, China Taiwan, South Korea, China Mainland, United States of America, France, Canada, Belgium, England & Wales, Ireland, Scotland, and Russia).

1,171 citations


Journal ArticleDOI
TL;DR: The Genomic Data Commons will initially house raw genomic data and diagnostic, histologic, and clinical outcome data from National Cancer Institute–funded projects, and will align sequencing data to the genome and identify mutations and alterations.
Abstract: The Genomic Data Commons will initially house raw genomic data and diagnostic, histologic, and clinical outcome data from National Cancer Institute–funded projects. A harmonization process will align sequencing data to the genome and identify mutations and alterations.

1,170 citations


Journal ArticleDOI
TL;DR: Pembrolizumab led to significantly longer progression-free survival than chemotherapy when received as first-line therapy for MSI-H-dMMR metastatic colorectal cancer, with fewer treatment-related adverse events.
Abstract: Background Programmed death 1 (PD-1) blockade has clinical benefit in microsatellite-instability–high (MSI-H) or mismatch-repair–deficient (dMMR) tumors after previous therapy. The efficac...

1,169 citations


Journal ArticleDOI
TL;DR: This review aims to summarize the emerging efforts to address current challenges and solutions in the treatment of infectious diseases, particularly the use of nanosilver antimicrobials.
Abstract: Multi-drug resistance is a growing problem in the treatment of infectious diseases and the widespread use of broad-spectrum antibiotics has produced antibiotic resistance for many human bacterial pathogens. Advances in nanotechnology have opened new horizons in nanomedicine, allowing the synthesis of nanoparticles that can be assembled into complex architectures. Novel studies and technologies are devoted to understanding the mechanisms of disease for the design of new drugs, but unfortunately infectious diseases continue to be a major health burden worldwide. Since ancient times, silver was known for its anti-bacterial effects and for centuries it has been used for prevention and control of disparate infections. Currently nanotechnology and nanomaterials are fully integrated in common applications and objects that we use every day. In addition, the silver nanoparticles are attracting much interest because of their potent antibacterial activity. Many studies have also shown an important activity of silver nanoparticles against bacterial biofilms. This review aims to summarize the emerging efforts to address current challenges and solutions in the treatment of infectious diseases, particularly the use of nanosilver antimicrobials.

1,169 citations


Journal ArticleDOI
TL;DR: In this paper, the authors introduce the concept of meaning making in research methods, look at how meaning is generated from qualitative data analysis specifically, and provide some examples from the literature of how meaning can be constructed and organized using a qualitative data analysis approach.
Abstract: An introduction and explanation of the epistemological differences of quantitative and qualitative research paradigms is first provided, followed by an overview of the realist philosophical paradigm, which attempts to accommodate the two. From this foundational discussion, the paper then introduces the concept of meaning making in research methods and looks at how meaning is generated from qualitative data analysis specifically. Finally, some examples from the literature of how meaning can be constructed and organized using a qualitative data analysis approach are provided. The paper aims to provide an introduction to research methodologies, coupled with a discussion on how meaning making actually occurs through qualitative data analysis. Key Words: Qualitative Research, Quantitative Research, Epistemology, Meaning Making, and Qualitative Data Analysis

Journal ArticleDOI
16 Dec 2016-Science
TL;DR: Reactive molecular dynamics simulations suggest that highly stressed, undercoordinated rhombus-rich surface configurations of the jagged nanowires enhance ORR activity versus more relaxed surfaces.
Abstract: Improving the platinum (Pt) mass activity for the oxygen reduction reaction (ORR) requires optimization of both the specific activity and the electrochemically active surface area (ECSA). We found that solution-synthesized Pt/NiO core/shell nanowires can be converted into PtNi alloy nanowires through a thermal annealing process and then transformed into jagged Pt nanowires via electrochemical dealloying. The jagged nanowires exhibit an ECSA of 118 square meters per gram of Pt and a specific activity of 11.5 milliamperes per square centimeter for ORR (at 0.9 volts versus reversible hydrogen electrode), yielding a mass activity of 13.6 amperes per milligram of Pt, nearly double previously reported best values. Reactive molecular dynamics simulations suggest that highly stressed, undercoordinated rhombus-rich surface configurations of the jagged nanowires enhance ORR activity versus more relaxed surfaces.
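For reference, the quoted mass activity follows directly from the reported specific activity and ECSA after a unit conversion; the snippet below is only an arithmetic check of the numbers in the abstract, not additional data.

```python
# Quick unit check: mass activity = specific activity x ECSA.
ecsa_m2_per_g = 118                  # m^2 per gram of Pt
specific_mA_per_cm2 = 11.5           # mA per cm^2 at 0.9 V vs RHE
ecsa_cm2_per_mg = ecsa_m2_per_g * 1e4 / 1e3          # 1 m^2 = 1e4 cm^2, 1 g = 1e3 mg
mass_activity_A_per_mg = specific_mA_per_cm2 * ecsa_cm2_per_mg / 1e3  # mA -> A
print(mass_activity_A_per_mg)        # ~13.6 A per mg of Pt, matching the abstract
```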

Journal ArticleDOI
TL;DR: It is suggested that data controllers should offer a particular type of explanation, unconditional counterfactual explanations, to support these three aims; such explanations describe the smallest change to the world that can be made to obtain a desirable outcome, or to arrive at the closest possible world, without needing to explain the internal logic of the system.
Abstract: There has been much discussion of the “right to explanation” in the EU General Data Protection Regulation, and its existence, merits, and disadvantages. Implementing a right to explanation that opens the ‘black box’ of algorithmic decision-making faces major legal and technical barriers. Explaining the functionality of complex algorithmic decision-making systems and their rationale in specific cases is a technically challenging problem. Some explanations may offer little meaningful information to data subjects, raising questions around their value. Data controllers have an interest to not disclose information about their algorithms that contains trade secrets, violates the rights and freedoms of others (e.g. privacy), or allows data subjects to game or manipulate decision-making. Explanations of automated decisions need not hinge on the general public understanding how algorithmic systems function. Even though such interpretability is of great importance and should be pursued, explanations can, in principle, be offered without opening the black box. Looking at explanations as a means to help a data subject act rather than merely understand, one could gauge the scope and content of explanations according to the specific goal or action they are intended to support. From the perspective of individuals affected by automated decision-making, we propose three aims for explanations: (1) to inform and help the individual understand why a particular decision was reached, (2) to provide grounds to contest the decision if the outcome is undesired, and (3) to understand what would need to change in order to receive a desired result in the future, based on the current decision-making model. We assess how each of these goals finds support in the GDPR, and the extent to which they hinge on opening the ‘black box’. We suggest data controllers should offer a particular type of explanation, ‘unconditional counterfactual explanations’, to support these three aims. These counterfactual explanations describe the smallest change to the world that can be made to obtain a desirable outcome, or to arrive at the “closest possible world.” As multiple variables or sets of variables can lead to one or more desirable outcomes, multiple counterfactual explanations can be provided, corresponding to different choices of nearby possible worlds for which the counterfactual holds. Counterfactuals describe a dependency on the external facts that lead to that decision without the need to convey the internal state or logic of an algorithm. As a result, counterfactuals serve as a minimal solution that bypasses the current technical limitations of interpretability, while striking a balance between transparency and the rights and freedoms of others (e.g. privacy, trade secrets).
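As a minimal sketch of how such a counterfactual might be computed for a differentiable model, the example below minimizes the distance to the original input plus a term pushing the prediction toward the desired outcome. The logistic-regression "scoring model", its weights, and the numbers are all made-up illustrations, not part of the paper.

```python
# Toy counterfactual search: find a small change to input x that flips a
# logistic-regression decision to the desired outcome, by gradient descent on
#   lam * (f(x') - target)^2 + ||x' - x||^2.
import numpy as np

w, b = np.array([2.0, -1.5]), -0.5            # fixed "black box" weights (illustrative)
f = lambda x: 1 / (1 + np.exp(-(w @ x + b)))  # probability of a favorable decision

def counterfactual(x, target=0.6, lam=10.0, lr=0.05, steps=2000):
    x_cf = x.copy()
    for _ in range(steps):
        p = f(x_cf)
        # gradient of lam*(p-target)^2 + ||x_cf - x||^2 with respect to x_cf
        grad = 2 * lam * (p - target) * p * (1 - p) * w + 2 * (x_cf - x)
        x_cf -= lr * grad
    return x_cf

x = np.array([0.1, 0.8])                      # original (unfavorably scored) input
x_cf = counterfactual(x)
print(x, f(x))        # original input and its low score
print(x_cf, f(x_cf))  # nearby input that would have received the desired outcome
```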

Journal ArticleDOI
Jixiang Zhang, Xiaoli Wang, Vikash Vikash, Qing Ye, Dandan Wu, Yu-Lan Liu, Weiguo Dong
TL;DR: This review paper focuses on the pattern of the generation and homeostasis of intracellular ROS, the mechanisms and targets of ROS impacting on cell-signaling proteins, ion channels and transporters, and the modification of protein kinases and the ubiquitination/proteasome system.
Abstract: It has long been recognized that an increase of reactive oxygen species (ROS) can modify cell-signaling proteins and have functional consequences, which successively mediate pathological processes such as atherosclerosis, diabetes, unchecked growth, neurodegeneration, inflammation, and aging. While numerous articles have demonstrated the impacts of ROS on various signaling pathways and clarified the mechanisms of action of cell-signaling proteins, their influence on the level of intracellular ROS, and their complex interactions among multiple ROS-associated signaling pathways, a systematic summary is still needed. In this review paper, we particularly focus on the pattern of the generation and homeostasis of intracellular ROS, the mechanisms and targets of ROS impacting on cell-signaling proteins (NF-κB, MAPKs, Keap1-Nrf2-ARE, and PI3K-Akt), ion channels and transporters (Ca(2+) and mPTP), and the modification of protein kinases and the ubiquitination/proteasome system.

Journal ArticleDOI
TL;DR: Patients who meet clinical criteria for a syndrome as well as those with identified pathogenic germline mutations should receive appropriate surveillance measures in order to minimize their overall risk of developing syndrome-specific cancers.

Proceedings Article
26 Feb 2019
TL;DR: Experimental results show that the proposed RotatE model is not only scalable, but also able to infer and model various relation patterns and significantly outperform existing state-of-the-art models for link prediction.
Abstract: We study the problem of learning representations of entities and relations in knowledge graphs for predicting missing links. The success of such a task heavily relies on the ability of modeling and inferring the patterns of (or between) the relations. In this paper, we present a new approach for knowledge graph embedding called RotatE, which is able to model and infer various relation patterns including: symmetry/antisymmetry, inversion, and composition. Specifically, the RotatE model defines each relation as a rotation from the source entity to the target entity in the complex vector space. In addition, we propose a novel self-adversarial negative sampling technique for efficiently and effectively training the RotatE model. Experimental results on multiple benchmark knowledge graphs show that the proposed RotatE model is not only scalable, but also able to infer and model various relation patterns and significantly outperform existing state-of-the-art models for link prediction.
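A small numpy sketch of the RotatE scoring idea as described: entities are complex vectors, each relation is an element-wise rotation (a unit-modulus complex vector), and a triple (h, r, t) is plausible when the rotated head lands close to the tail. The embeddings below are random stand-ins; the real model learns them, using the self-adversarial negative sampling the abstract mentions.

```python
# Minimal sketch of RotatE scoring: relation r acts as an element-wise rotation
# in complex space, and plausible triples (h, r, t) have small ||h o r - t||.
# Embeddings are random placeholders rather than trained vectors.
import numpy as np

rng = np.random.default_rng(0)
dim = 8

def entity():                        # complex entity embedding
    return rng.normal(size=dim) + 1j * rng.normal(size=dim)

def relation():                      # unit-modulus complex vector = pure rotation
    phase = rng.uniform(0, 2 * np.pi, size=dim)
    return np.exp(1j * phase)

h, r = entity(), relation()
t_true = h * r                       # tail that exactly matches the rotation
t_rand = entity()

score = lambda h, r, t: -np.linalg.norm(h * r - t, ord=1)
print(score(h, r, t_true))           # ~0: highly plausible triple
print(score(h, r, t_rand))           # much more negative: implausible triple
```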

Posted Content
TL;DR: This work presents a technique for adding global context to deep convolutional networks for semantic segmentation, and achieves state-of-the-art performance on SiftFlow and PASCAL-Context with small additional computational cost over baselines.
Abstract: We present a technique for adding global context to deep convolutional networks for semantic segmentation. The approach is simple, using the average feature for a layer to augment the features at each location. In addition, we study several idiosyncrasies of training, significantly increasing the performance of baseline networks (e.g. from FCN). When we add our proposed global feature, and a technique for learning normalization parameters, accuracy increases consistently even over our improved versions of the baselines. Our proposed approach, ParseNet, achieves state-of-the-art performance on SiftFlow and PASCAL-Context with small additional computational cost over baselines, and near current state-of-the-art performance on PASCAL VOC 2012 semantic segmentation with a simple approach. Code is available at this https URL .
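A small numpy sketch of the global-context idea as described: average a layer's features over all spatial positions, normalize, then tile the pooled vector back to the spatial grid and concatenate it with the local features. The learned per-channel scaling used for normalization in the paper is omitted here, and the shapes are illustrative.

```python
# Sketch of adding global context to a convolutional feature map: global-average-
# pool the features, L2-normalize (learned per-channel scale omitted), then tile
# the global vector and concatenate it with the local features at every location.
import numpy as np

def l2_normalize(x, axis, eps=1e-12):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def add_global_context(feat):                  # feat: (channels, height, width)
    c, h, w = feat.shape
    global_vec = feat.mean(axis=(1, 2))        # global average feature, shape (c,)
    local = l2_normalize(feat, axis=0)         # normalize across channels
    glob = l2_normalize(global_vec, axis=0)
    tiled = np.broadcast_to(glob[:, None, None], (c, h, w))
    return np.concatenate([local, tiled], axis=0)   # (2c, h, w)

feat = np.random.rand(256, 32, 32)
print(add_global_context(feat).shape)          # (512, 32, 32)
```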

Journal ArticleDOI
TL;DR: A practical approach to forecasting “at scale” that combines configurable models with analyst-in-the-loop performance analysis, and a modular regression model with interpretable parameters that can be intuitively adjusted by analysts with domain knowledge about the time series are described.
Abstract: Forecasting is a common data science task that helps organizations with capacity planning, goal setting, and anomaly detection. Despite its importance, there are serious challenges associated with ...
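A minimal sketch of the kind of decomposable regression model the TL;DR refers to: a trend component plus a Fourier-series seasonal component, fitted by ordinary least squares. This is a simplified stand-in for the configurable, analyst-adjustable model described in the paper, with made-up data.

```python
# Toy decomposable forecasting model y(t) ~ trend + seasonality, fit by ordinary
# least squares with a Fourier basis for the seasonal component. A simplified
# illustration of the modular-regression idea, not the paper's implementation.
import numpy as np

def design_matrix(t, period=365.25, n_harmonics=3):
    cols = [np.ones_like(t), t]                      # intercept + linear trend
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * k * t / period))
        cols.append(np.cos(2 * np.pi * k * t / period))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
t = np.arange(730.0)                                 # two years of daily data
y = 0.01 * t + 2 * np.sin(2 * np.pi * t / 365.25) + rng.normal(0, 0.3, t.size)

beta, *_ = np.linalg.lstsq(design_matrix(t), y, rcond=None)
t_future = np.arange(730.0, 760.0)                   # 30-day forecast horizon
forecast = design_matrix(t_future) @ beta
print(forecast[:5])
```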

Journal ArticleDOI
TL;DR: Modifications to the Gleason grading system are incorporated into the 2016 WHO section on grading of prostate cancer, and it is recommended that the percentage of pattern 4 should be reported for Gleason score 7.

Journal ArticleDOI
Markus Ackermann, Andrea Albert, Brandon Anderson, W. B. Atwood, Luca Baldini, Guido Barbiellini, Denis Bastieri, Keith Bechtol, Ronaldo Bellazzini, Elisabetta Bissaldi, Roger Blandford, E. D. Bloom, R. Bonino, Eugenio Bottacini, T. J. Brandt, Johan Bregeon, P. Bruel, R. Buehler, G. A. Caliandro, R. A. Cameron, R. Caputo, M. Caragiulo, P. A. Caraveo, C. Cecchi, Eric Charles, A. Chekhtman, James Chiang, G. Chiaro, Stefano Ciprini, R. Claus, Johann Cohen-Tanugi, Jan Conrad, Alessandro Cuoco, S. Cutini, Filippo D'Ammando, A. De Angelis, F. de Palma, R. Desiante, Seth Digel, L. Di Venere, Persis S. Drell, Alex Drlica-Wagner, R. Essig, C. Favuzzi, S. J. Fegan, Elizabeth C. Ferrara, W. B. Focke, A. Franckowiak, Yasushi Fukazawa, Stefan Funk, P. Fusco, F. Gargano, Dario Gasparrini, Nicola Giglietto, Francesco Giordano, Marcello Giroletti, T. Glanzman, G. Godfrey, G. A. Gomez-Vargas, I. A. Grenier, Sylvain Guiriec, M. Gustafsson, E. Hays, John W. Hewitt, D. Horan, T. Jogler, Gudlaugur Johannesson, M. Kuss, Stefan Larsson, Luca Latronico, Jingcheng Li, L. Li, M. Llena Garde, Francesco Longo, F. Loparco, P. Lubrano, D. Malyshev, M. Mayer, M. N. Mazziotta, Julie McEnery, Manuel Meyer, Peter F. Michelson, Tsunefumi Mizuno, A. A. Moiseev, M. E. Monzani, A. Morselli, S. Murgia, E. Nuss, T. Ohsugi, M. Orienti, E. Orlando, J. F. Ormes, David Paneque, J. S. Perkins, Melissa Pesce-Rollins, F. Piron, G. Pivato, T. A. Porter, S. Rainò, R. Rando, M. Razzano, A. Reimer, Olaf Reimer, Steven Ritz, Miguel A. Sánchez-Conde, André Schulz, Neelima Sehgal, Carmelo Sgrò, E. J. Siskind, F. Spada, Gloria Spandre, P. Spinelli, Louis E. Strigari, Hiroyasu Tajima, Hiromitsu Takahashi, J. B. Thayer, L. Tibaldo, Diego F. Torres, Eleonora Troja, Giacomo Vianello, Michael David Werner, Brian L Winer, K. S. Wood, Matthew Wood, Gabrijela Zaharijas, Stephan Zimmer
TL;DR: In this article, the authors report on γ-ray observations of the Milky Way dwarf spheroidal satellite galaxies (dSphs) based on six years of Fermi Large Area Telescope data processed with the new Pass 8 event-level analysis.
Abstract: The dwarf spheroidal satellite galaxies (dSphs) of the Milky Way are some of the most dark matter (DM) dominated objects known. We report on γ-ray observations of Milky Way dSphs based on six years of Fermi Large Area Telescope data processed with the new Pass8 event-level analysis. None of the dSphs are significantly detected in γ rays, and we present upper limits on the DM annihilation cross section from a combined analysis of 15 dSphs. These constraints are among the strongest and most robust to date and lie below the canonical thermal relic cross section for DM of mass ≲100 GeV annihilating via quark and τ-lepton channels.

Journal ArticleDOI
TL;DR: Preliminary findings suggest that in the United States, persons with underlying health conditions or other recognized risk factors for severe outcomes from respiratory infections appear to be at a higher risk for severe disease from COVID-19 than are persons without these conditions.
Abstract: On March 11, 2020, the World Health Organization declared Coronavirus Disease 2019 (COVID-19) a pandemic (1). As of March 28, 2020, a total of 571,678 confirmed COVID-19 cases and 26,494 deaths have been reported worldwide (2). Reports from China and Italy suggest that risk factors for severe disease include older age and the presence of at least one of several underlying health conditions (3,4). U.S. older adults, including those aged ≥65 years and particularly those aged ≥85 years, also appear to be at higher risk for severe COVID-19-associated outcomes; however, data describing underlying health conditions among U.S. COVID-19 patients have not yet been reported (5). As of March 28, 2020, U.S. states and territories have reported 122,653 U.S. COVID-19 cases to CDC, including 7,162 (5.8%) for whom data on underlying health conditions and other known risk factors for severe outcomes from respiratory infections were reported. Among these 7,162 cases, 2,692 (37.6%) patients had one or more underlying health condition or risk factor, and 4,470 (62.4%) had none of these conditions reported. The percentage of COVID-19 patients with at least one underlying health condition or risk factor was higher among those requiring intensive care unit (ICU) admission (358 of 457, 78%) and those requiring hospitalization without ICU admission (732 of 1,037, 71%) than that among those who were not hospitalized (1,388 of 5,143, 27%). The most commonly reported conditions were diabetes mellitus, chronic lung disease, and cardiovascular disease. These preliminary findings suggest that in the United States, persons with underlying health conditions or other recognized risk factors for severe outcomes from respiratory infections appear to be at a higher risk for severe disease from COVID-19 than are persons without these conditions.

Posted ContentDOI
Spyridon Bakas, Mauricio Reyes, Andras Jakab, Stefan Bauer, and 435 more authors (111 institutions)
TL;DR: This study assesses the state-of-the-art machine learning methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018, and investigates the challenge of identifying the best ML algorithms for each of these tasks.
Abstract: Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients that underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.
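For context on how segmentations in such challenges are typically evaluated, here is a small sketch of the Dice overlap score between a predicted mask and a reference annotation; the 2D toy arrays stand in for 3D mpMRI label volumes and are not challenge data.

```python
# Dice similarity coefficient, a standard overlap measure for comparing a
# predicted segmentation mask against a reference annotation.
import numpy as np

def dice(pred, ref):
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom else 1.0   # both empty -> perfect

ref = np.zeros((8, 8), dtype=int)
ref[2:6, 2:6] = 1                      # reference tumor sub-region
pred = np.zeros((8, 8), dtype=int)
pred[3:7, 2:6] = 1                     # prediction shifted by one voxel
print(dice(pred, ref))                 # 0.75
```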

Proceedings Article
Pavlo Molchanov1, Stephen Tyree1, Tero Karras1, Timo Aila1, Jan Kautz1 
04 Nov 2016
TL;DR: It is shown that pruning can lead to more than 10x theoretical (5x practical) reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier.
Abstract: We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with fine-tuning by backpropagation - a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on Taylor expansion that approximates the change in the cost function induced by pruning network parameters. We focus on transfer learning, where large pretrained networks are adapted to specialized tasks. The proposed criterion demonstrates superior performance compared to other criteria, e.g. the norm of kernel weights or feature map activation, for pruning large CNNs after adaptation to fine-grained classification tasks (Birds-200 and Flowers-102) relaying only on the first order gradient information. We also show that pruning can lead to more than 10x theoretical (5x practical) reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier. Finally, we show results for the large-scale ImageNet dataset to emphasize the flexibility of our approach.
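A small numpy sketch of the Taylor-expansion criterion as described: the change in loss from removing a feature map is approximated by the absolute value of activation times the gradient of the loss with respect to that activation, averaged over examples and spatial positions, and the lowest-ranked channels are pruned first. The activation and gradient arrays below are random placeholders for values captured during backpropagation.

```python
# Sketch of ranking convolutional feature maps for pruning with a first-order
# Taylor criterion: importance of channel c ~ |mean over batch and space of
# activation * dLoss/dactivation|, followed by per-layer rescaling.
import numpy as np

rng = np.random.default_rng(0)
batch, channels, h, w = 16, 64, 14, 14
activations = rng.normal(size=(batch, channels, h, w))
grads = rng.normal(size=(batch, channels, h, w))        # dLoss/dactivation

taylor = np.abs((activations * grads).mean(axis=(0, 2, 3)))   # one score per channel
taylor /= np.linalg.norm(taylor)                        # per-layer L2 rescaling

n_prune = 8
prune_idx = np.argsort(taylor)[:n_prune]                # least important channels
print(sorted(prune_idx.tolist()))
```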

Proceedings ArticleDOI
22 May 2017
TL;DR: This paper presents new and efficient protocols for privacy preserving machine learning for linear regression, logistic regression and neural network training using the stochastic gradient descent method, and implements the first privacy preserving system for training neural networks.
Abstract: Machine learning is widely used in practice to produce predictive models for applications such as image processing, speech and text recognition. These models are more accurate when trained on large amount of data collected from different sources. However, the massive data collection raises privacy concerns. In this paper, we present new and efficient protocols for privacy preserving machine learning for linear regression, logistic regression and neural network training using the stochastic gradient descent method. Our protocols fall in the two-server model where data owners distribute their private data among two non-colluding servers who train various models on the joint data using secure two-party computation (2PC). We develop new techniques to support secure arithmetic operations on shared decimal numbers, and propose MPC-friendly alternatives to non-linear functions such as sigmoid and softmax that are superior to prior work. We implement our system in C++. Our experiments validate that our protocols are several orders of magnitude faster than the state of the art implementations for privacy preserving linear and logistic regressions, and scale to millions of data samples with thousands of features. We also implement the first privacy preserving system for training neural networks.
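The arithmetic core of such two-server protocols can be illustrated with additive secret sharing over a ring plus a fixed-point encoding for decimal values. The sketch below shows only sharing, local addition, and reconstruction with made-up numbers; it omits the multiplication triples, oblivious transfer, and garbled-circuit machinery of the actual protocols.

```python
# Toy additive secret sharing over Z_{2^64} with fixed-point encoding of
# decimals: each server holds one random-looking share, addition is done
# locally on shares, and only the sum of both shares reveals a value.
import secrets

RING = 2**64
FRAC_BITS = 16                                   # fixed-point precision

def encode(x):
    return int(round(x * 2**FRAC_BITS)) % RING

def decode(v):                                   # map back to a signed fixed-point number
    if v >= RING // 2:
        v -= RING
    return v / 2**FRAC_BITS

def share(x):
    s0 = secrets.randbelow(RING)                 # server 0's share: uniform random
    s1 = (encode(x) - s0) % RING                 # server 1's share
    return s0, s1

def reconstruct(s0, s1):
    return decode((s0 + s1) % RING)

a0, a1 = share(3.25)
b0, b1 = share(-1.5)
c0, c1 = (a0 + b0) % RING, (a1 + b1) % RING      # each server adds locally
print(reconstruct(c0, c1))                       # 1.75, computed without either
                                                 # server seeing the inputs
```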

Journal ArticleDOI
TL;DR: In this article, the authors describe the meaning of theme and offer a method on theme construction that can be used by qualitative content analysis and thematic analysis researchers in line with the underpinning specific approach to data analysis.
Abstract: Sufficient knowledge is available about the definition, details and differences of qualitative content and thematic analysis as two approaches of qualitative descriptive research. However, identifying the main features of theme as the data analysis product and the method of its development remain unclear. The purpose of this study was to describe the meaning of theme and offer a method on theme construction that can be used by qualitative content analysis and thematic analysis researchers in line with the underpinning specific approach to data analysis. This methodological paper comprises an analytical overview of qualitative descriptive research products and the meaning of theme. Also, our practical experiences of qualitative analysis supported by relevant published literature informed the generation of a stage-like model of theme construction for qualitative content analysis and thematic analysis. This paper comprises: (i) analytical importance of theme, (ii) meaning of theme, (iii) meaning of category, (iv) theme and category in terms of level of content, and (v) theme development. This paper offers a conceptual clarification and a pragmatic step-by-step method of theme development that has the capacity to assist nurse researchers in understanding how theme is developed. As nursing is a pragmatic discipline, nurse researchers have tried to develop practical findings and devise some way to "do something" with findings to enhance the action and impact of nursing. The application of a precise method of theme development for qualitative descriptive data analysis suggested in this paper helps yield meaningful, credible and practical results for nursing.

Journal ArticleDOI
TL;DR: In participants with atrial fibrillation undergoing PCI with placement of stents, the administration of either low-dose rivaroxaban plus a P2Y12 inhibitor for 12 months or very-low-dose rivaroxaban plus DAPT for 1, 6, or 12 months was associated with a lower rate of clinically significant bleeding than standard therapy with a vitamin K antagonist plus DAPT.
Abstract: BackgroundIn patients with atrial fibrillation undergoing percutaneous coronary intervention (PCI) with placement of stents, standard anticoagulation with a vitamin K antagonist plus dual antiplatelet therapy (DAPT) with a P2Y12 inhibitor and aspirin reduces the risk of thrombosis and stroke but increases the risk of bleeding. The effectiveness and safety of anticoagulation with rivaroxaban plus either one or two antiplatelet agents are uncertain. MethodsWe randomly assigned 2124 participants with nonvalvular atrial fibrillation who had undergone PCI with stenting to receive, in a 1:1:1 ratio, low-dose rivaroxaban (15 mg once daily) plus a P2Y12 inhibitor for 12 months (group 1), very-low-dose rivaroxaban (2.5 mg twice daily) plus DAPT for 1, 6, or 12 months (group 2), or standard therapy with a dose-adjusted vitamin K antagonist (once daily) plus DAPT for 1, 6, or 12 months (group 3). The primary safety outcome was clinically significant bleeding (a composite of major bleeding or minor bleeding according...

Journal ArticleDOI
TL;DR: Evidence is provided that micro-PS cause feeding modifications and reproductive disruption in oysters, with significant impacts on offspring, providing ground-breaking data on microplastic impacts in an invertebrate model, helping to predict ecological impact in marine ecosystems.
Abstract: Plastics are persistent synthetic polymers that accumulate as waste in the marine environment. Microplastic (MP) particles are derived from the breakdown of larger debris or can enter the environment as microscopic fragments. Because filter-feeder organisms ingest MP while feeding, they are likely to be impacted by MP pollution. To assess the impact of polystyrene microspheres (micro-PS) on the physiology of the Pacific oyster, adult oysters were experimentally exposed to virgin micro-PS (2 and 6 µm in diameter; 0.023 mg·L−1) for 2 mo during a reproductive cycle. Effects were investigated on ecophysiological parameters; cellular, transcriptomic, and proteomic responses; fecundity; and offspring development. Oysters preferentially ingested the 6-µm micro-PS over the 2-µm-diameter particles. Consumption of microalgae and absorption efficiency were significantly higher in exposed oysters, suggesting compensatory and physical effects on both digestive parameters. After 2 mo, exposed oysters had significant decreases in oocyte number (−38%), diameter (−5%), and sperm velocity (−23%). The D-larval yield and larval development of offspring derived from exposed parents decreased by 41% and 18%, respectively, compared with control offspring. Dynamic energy budget modeling, supported by transcriptomic profiles, suggested a significant shift of energy allocation from reproduction to structural growth, and elevated maintenance costs in exposed oysters, which is thought to be caused by interference with energy uptake. Molecular signatures of endocrine disruption were also revealed, but no endocrine disruptors were found in the biological samples. This study provides evidence that micro-PS cause feeding modifications and reproductive disruption in oysters, with significant impacts on offspring.

Journal ArticleDOI
TL;DR: The atmospheric fallout of microplastics was investigated at two different urban and sub-urban sites, and a rough estimation suggests that between 3 and 10 tons of fibers are deposited by atmospheric fallout at the scale of the Parisian agglomeration every year.

Journal ArticleDOI
TL;DR: This review covers several definitions of a 'healthy microbiome' that have emerged, the current understanding of the ranges of healthy microbial diversity, and gaps, such as the characterization of molecular function and the development of ecological therapies, that remain to be addressed in the future.
Abstract: Humans are virtually identical in their genetic makeup, yet the small differences in our DNA give rise to tremendous phenotypic diversity across the human population. By contrast, the metagenome of the human microbiome—the total DNA content of microbes inhabiting our bodies—is quite a bit more variable, with only a third of its constituent genes found in a majority of healthy individuals. Understanding this variability in the “healthy microbiome” has thus been a major challenge in microbiome research, dating back at least to the 1960s, continuing through the Human Microbiome Project and beyond. Cataloguing the necessary and sufficient sets of microbiome features that support health, and the normal ranges of these features in healthy populations, is an essential first step to identifying and correcting microbial configurations that are implicated in disease. Toward this goal, several population-scale studies have documented the ranges and diversity of both taxonomic compositions and functional potentials normally observed in the microbiomes of healthy populations, along with possible driving factors such as geography, diet, and lifestyle. Here, we review several definitions of a ‘healthy microbiome’ that have emerged, the current understanding of the ranges of healthy microbial diversity, and gaps such as the characterization of molecular function and the development of ecological therapies to be addressed in the future.

Journal ArticleDOI
28 Oct 2020-eLife
TL;DR: It is shown that functional SARS-CoV-2 S protein variants with mutations in the receptor-binding domain (RBD) and N-terminal domain that confer resistance to monoclonal antibodies or convalescent plasma can be readily selected.
Abstract: Neutralizing antibodies elicited by prior infection or vaccination are likely to be key for future protection of individuals and populations against SARS-CoV-2. Moreover, passively administered antibodies are among the most promising therapeutic and prophylactic anti-SARS-CoV-2 agents. However, the degree to which SARS-CoV-2 will adapt to evade neutralizing antibodies is unclear. Using a recombinant chimeric VSV/SARS-CoV-2 reporter virus, we show that functional SARS-CoV-2 S protein variants with mutations in the receptor-binding domain (RBD) and N-terminal domain that confer resistance to monoclonal antibodies or convalescent plasma can be readily selected. Notably, SARS-CoV-2 S variants that resist commonly elicited neutralizing antibodies are now present at low frequencies in circulating SARS-CoV-2 populations. Finally, the emergence of antibody-resistant SARS-CoV-2 variants that might limit the therapeutic usefulness of monoclonal antibodies can be mitigated by the use of antibody combinations that target distinct neutralizing epitopes.