
Journal ArticleDOI
TL;DR: The theory of weak gravitational lensing is discussed in this paper, and applications to galaxies, galaxy clusters and larger-scale structures in the universe are reviewed and summarised in detail.
Abstract: According to the theory of general relativity, masses deflect light in a way similar to convex glass lenses. This gravitational lensing effect is astigmatic, giving rise to image distortions. These distortions allow us to quantify cosmic structures statistically on a broad range of scales, and to map the spatial distribution of dark and visible matter. We summarise the theory of weak gravitational lensing and review applications to galaxies, galaxy clusters and larger-scale structures in the Universe.

1,761 citations
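
The deflection described in this abstract is quantified, in the simplest case of a point mass, by the standard general-relativistic formula below; this textbook relation is not quoted in the abstract itself and is added only as background.

```latex
% Deflection angle of a light ray passing a point mass M at impact
% parameter \xi -- twice the Newtonian value, as general relativity predicts:
\hat{\alpha} = \frac{4GM}{c^{2}\xi}
```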


Journal ArticleDOI
TL;DR: While the intrinsic complexity of natural product-based drug discovery necessitates highly integrated interdisciplinary approaches, the reviewed scientific developments, recent technological advances, and research trends clearly indicate that natural products will be among the most important sources of new drugs in the future.

1,760 citations


Journal ArticleDOI
TL;DR: A single dose of Ad26.COV2.S protected against symptomatic Covid-19 and asymptomatic SARS-CoV-2 infection and was effective against severe–critical disease, including hospitalization and death, in an international, randomized, double-blind, placebo-controlled, phase 3 trial.
Abstract: Background The Ad26.COV2.S vaccine is a recombinant, replication-incompetent human adenovirus type 26 vector encoding full-length severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) ...

1,760 citations


Proceedings ArticleDOI
TL;DR: In this article, large-scale synthetic stereo video datasets are proposed to enable training and evaluation of convolutional networks for optical flow, disparity and scene flow estimation.
Abstract: Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluating scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.

1,759 citations


Journal ArticleDOI
TL;DR: The experiments define the most rigorous framework for genome-wide identification of RGN off-target effects to date and provide a method for evaluating the safety of these nucleases before clinical use.
Abstract: CRISPR RNA-guided nucleases (RGNs) are widely used genome-editing reagents, but methods to delineate their genome-wide, off-target cleavage activities have been lacking. Here we describe an approach for global detection of DNA double-stranded breaks (DSBs) introduced by RGNs and potentially other nucleases. This method, called genome-wide, unbiased identification of DSBs enabled by sequencing (GUIDE-seq), relies on capture of double-stranded oligodeoxynucleotides into DSBs. Application of GUIDE-seq to 13 RGNs in two human cell lines revealed wide variability in RGN off-target activities and unappreciated characteristics of off-target sequences. The majority of identified sites were not detected by existing computational methods or chromatin immunoprecipitation sequencing (ChIP-seq). GUIDE-seq also identified RGN-independent genomic breakpoint 'hotspots'. Finally, GUIDE-seq revealed that truncated guide RNAs exhibit substantially reduced RGN-induced, off-target DSBs. Our experiments define the most rigorous framework for genome-wide identification of RGN off-target effects to date and provide a method for evaluating the safety of these nucleases before clinical use.

1,759 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a survey of the research on computation offloading in mobile edge computing (MEC), focusing on user-oriented use cases and reference scenarios where the MEC is applicable.
Abstract: The technological evolution of mobile user equipment (UE), such as smartphones and laptops, goes hand in hand with the evolution of new mobile applications. However, running computationally demanding applications at the UE is constrained by its limited battery capacity and energy consumption. A suitable solution for extending the battery lifetime of the UE is to offload processing-heavy applications to a conventional centralized cloud (CC). Nevertheless, this option introduces a significant execution delay, consisting of the time to deliver the offloaded application to the cloud and back plus the computation time at the cloud. Such delay is inconvenient and makes offloading unsuitable for real-time applications. To cope with the delay problem, a new concept known as mobile edge computing (MEC) has emerged. MEC brings computation and storage resources to the edge of the mobile network, enabling highly demanding applications to run at the UE while meeting strict delay requirements. The MEC computing resources can also be exploited by operators and third parties for specific purposes. In this paper, we first describe major use cases and reference scenarios where MEC is applicable. After that, we survey existing concepts integrating MEC functionalities into mobile networks and discuss the current state of MEC standardization. The core of this survey then focuses on a user-oriented use case of MEC, i.e., computation offloading. In this regard, we divide the research on computation offloading into three key areas: i) the decision on computation offloading, ii) the allocation of computing resources within the MEC, and iii) mobility management. Finally, we highlight lessons learned in the area of MEC and discuss open research challenges that must be addressed to fully realize the potential offered by MEC.

1,759 citations
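
The offloading decision this survey singles out as a key research area typically reduces to comparing local against remote execution in delay and UE energy. The sketch below illustrates that canonical trade-off; all function names, parameters and numbers are illustrative assumptions, not values from the survey.

```python
# Canonical offloading decision (illustrative sketch, not from the survey):
# offload a task to the MEC server only when it lowers both delay and UE energy.

def local_delay(cycles, f_local):
    return cycles / f_local                      # seconds to compute locally

def offload_delay(data_bits, rate, cycles, f_mec):
    return data_bits / rate + cycles / f_mec     # uplink transfer + remote compute

def should_offload(cycles, data_bits, f_local=1e9, f_mec=10e9,
                   rate=20e6, p_compute=0.9, p_transmit=1.3):
    """True if offloading reduces both delay and UE energy (Hz, bit/s, W)."""
    d_loc = local_delay(cycles, f_local)
    d_off = offload_delay(data_bits, rate, cycles, f_mec)
    e_loc = p_compute * d_loc                    # energy burned computing locally
    e_off = p_transmit * (data_bits / rate)      # energy burned transmitting
    return d_off < d_loc and e_off < e_loc

# Example: a task of 2e9 CPU cycles with a 1 MB (8e6 bit) input.
print(should_offload(cycles=2e9, data_bits=8e6))   # True under these parameters
```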


Journal ArticleDOI
TL;DR: Among patients with heart failure and moderate‐to‐severe or severe secondary mitral regurgitation who remained symptomatic despite the use of maximal doses of guideline‐directed medical therapy, transcatheter mitral‐valve repair resulted in a lower rate of hospitalization for heart failure and lower all‐cause mortality within 24 months of follow‐up than medical therapy alone.
Abstract: Background Among patients with heart failure who have mitral regurgitation due to left ventricular dysfunction, the prognosis is poor. Transcatheter mitral-valve repair may improve their clinical outcomes. Methods At 78 sites in the United States and Canada, we enrolled patients with heart failure and moderate-to-severe or severe secondary mitral regurgitation who remained symptomatic despite the use of maximal doses of guideline-directed medical therapy. Patients were randomly assigned to transcatheter mitral-valve repair plus medical therapy (device group) or medical therapy alone (control group). The primary effectiveness end point was all hospitalizations for heart failure within 24 months of follow-up. The primary safety end point was freedom from device-related complications at 12 months; the rate for this end point was compared with a prespecified objective performance goal of 88.0%. Results Of the 614 patients who were enrolled in the trial, 302 were assigned to the device group and 312 to the control group ...

1,758 citations


Journal ArticleDOI
TL;DR: The Scenario Model Intercomparison Project (ScenarioMIP) as discussed by the authors is the primary activity within Phase 6 of the Coupled Model Intercomparison Project (CMIP6) that will provide multi-model climate projections based on alternative scenarios of future emissions and land use changes produced with integrated assessment models.
Abstract: . Projections of future climate change play a fundamental role in improving understanding of the climate system as well as characterizing societal risks and response options. The Scenario Model Intercomparison Project (ScenarioMIP) is the primary activity within Phase 6 of the Coupled Model Intercomparison Project (CMIP6) that will provide multi-model climate projections based on alternative scenarios of future emissions and land use changes produced with integrated assessment models. In this paper, we describe ScenarioMIP's objectives, experimental design, and its relation to other activities within CMIP6. The ScenarioMIP design is one component of a larger scenario process that aims to facilitate a wide range of integrated studies across the climate science, integrated assessment modeling, and impacts, adaptation, and vulnerability communities, and will form an important part of the evidence base in the forthcoming Intergovernmental Panel on Climate Change (IPCC) assessments. At the same time, it will provide the basis for investigating a number of targeted science and policy questions that are especially relevant to scenario-based analysis, including the role of specific forcings such as land use and aerosols, the effect of a peak and decline in forcing, the consequences of scenarios that limit warming to below 2 °C, the relative contributions to uncertainty from scenarios, climate models, and internal variability, and long-term climate system outcomes beyond the 21st century. To serve this wide range of scientific communities and address these questions, a design has been identified consisting of eight alternative 21st century scenarios plus one large initial condition ensemble and a set of long-term extensions, divided into two tiers defined by relative priority. Some of these scenarios will also provide a basis for variants planned to be run in other CMIP6-Endorsed MIPs to investigate questions related to specific forcings. Harmonized, spatially explicit emissions and land use scenarios generated with integrated assessment models will be provided to participating climate modeling groups by late 2016, with the climate model simulations run within the 2017–2018 time frame, and output from the climate model projections made available and analyses performed over the 2018–2020 period.

1,758 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems, which combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure.
Abstract: In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of this paper is the observation that unrolled iterative methods have the form of a CNN (filtering followed by pointwise non-linearity) when the normal operator ($H^{*}H$, where $H^{*}$ is the adjoint of the forward imaging operator $H$) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a $512\times 512$ image on the GPU.

1,757 citations
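
The pipeline the abstract describes, direct inversion followed by a CNN trained with residual learning, can be sketched in a few lines of PyTorch. The multiresolution (U-Net-style) decomposition is reduced here to a small convolutional stack for brevity; treat this as an illustrative sketch, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ArtifactRemovalCNN(nn.Module):
    """Residual CNN applied after direct inversion (e.g., FBP). The paper's
    multiresolution U-Net is reduced to a small conv stack here -- a sketch,
    not the published network."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, fbp_image):
        # Residual learning: predict the artifact image and subtract it,
        # preserving structure recovered by the direct inversion.
        return fbp_image - self.body(fbp_image)

x = torch.randn(1, 1, 512, 512)   # a direct-inversion (FBP) reconstruction
model = ArtifactRemovalCNN()
print(model(x).shape)             # torch.Size([1, 1, 512, 512])
```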


Posted Content
TL;DR: This work identifies obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples, and develops attack techniques to overcome this effect.
Abstract: We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples. While defenses that cause obfuscated gradients appear to defeat iterative optimization-based attacks, we find defenses relying on this effect can be circumvented. We describe characteristic behaviors of defenses exhibiting the effect, and for each of the three types of obfuscated gradients we discover, we develop attack techniques to overcome it. In a case study, examining non-certified white-box-secure defenses at ICLR 2018, we find obfuscated gradients are a common occurrence, with 7 of 9 defenses relying on obfuscated gradients. Our new attacks successfully circumvent 6 completely, and 1 partially, in the original threat model each paper considers.

1,757 citations
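
One of the attack techniques this paper develops, the backward pass differentiable approximation (BPDA), replaces the gradient of a non-differentiable defense with a usable surrogate. Below is a minimal PyTorch sketch, with a bit-depth-reduction "defense" chosen purely for illustration.

```python
import torch

class BPDAIdentity(torch.autograd.Function):
    """BPDA sketch: run the non-differentiable defense g(x) on the forward
    pass, but approximate its gradient by the identity on the backward pass."""
    @staticmethod
    def forward(ctx, x, g):
        return g(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None   # d g(x)/dx approximated as the identity

# Illustrative non-differentiable "defense": bit-depth reduction.
def quantize(x, levels=8):
    return torch.round(x * (levels - 1)) / (levels - 1)

x = torch.rand(1, 3, 32, 32, requires_grad=True)
y = BPDAIdentity.apply(x, quantize)
y.sum().backward()
print(x.grad.abs().mean())   # nonzero: gradients now flow "through" the defense
```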


Proceedings ArticleDOI
01 Jun 2016
TL;DR: In this article, the authors proposed an online hard example mining (OHEM) algorithm for training region-based ConvNet detectors and achieved state-of-the-art results.
Abstract: The field of object detection has made significant advances riding on the wave of region-based ConvNets, but their training procedure still includes many heuristics and hyperparameters that are costly to tune. We present a simple yet surprisingly effective online hard example mining (OHEM) algorithm for training region-based ConvNet detectors. Our motivation is the same as it has always been – detection datasets contain an overwhelming number of easy examples and a small number of hard examples. Automatic selection of these hard examples can make training more effective and efficient. OHEM is a simple and intuitive algorithm that eliminates several heuristics and hyperparameters in common use. But more importantly, it yields consistent and significant boosts in detection performance on benchmarks like PASCAL VOC 2007 and 2012. Its effectiveness increases as datasets become larger and more difficult, as demonstrated by the results on the MS COCO dataset. Moreover, combined with complementary advances in the field, OHEM leads to state-of-the-art results of 78.9% and 76.3% mAP on PASCAL VOC 2007 and 2012 respectively.
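
The core of OHEM can be stated compactly: score every candidate example by its current loss, keep only the hardest, and backpropagate through those alone. The snippet below is a generic classification-loss sketch of that idea, not the paper's Fast R-CNN implementation; the batch size and shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def ohem_loss(logits, targets, batch_size=128):
    """Online hard example mining (sketch): select the examples with the
    highest current loss and compute the training loss only on those."""
    with torch.no_grad():
        per_example = F.cross_entropy(logits, targets, reduction="none")
        hard_idx = per_example.topk(min(batch_size, len(targets))).indices
    # Recompute the loss on the selected hard examples with gradients enabled.
    return F.cross_entropy(logits[hard_idx], targets[hard_idx])

logits = torch.randn(2000, 21, requires_grad=True)   # e.g., RoI scores, 20 classes + bg
targets = torch.randint(0, 21, (2000,))
loss = ohem_loss(logits, targets)
loss.backward()
```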

Journal ArticleDOI
03 Sep 2015-Nature
TL;DR: Sulfur hydride is investigated, and it is argued that the phase responsible for high-Tc superconductivity in this system is likely to be H3S, formed from H2S by decomposition under pressure, which raises hopes for achieving room-temperature superconductivity in other hydrogen-based materials.
Abstract: A superconductor is a material that can conduct electricity without resistance below a superconducting transition temperature, Tc. The highest Tc that has been achieved to date is in the copper oxide system: 133 kelvin at ambient pressure and 164 kelvin at high pressures. As the nature of superconductivity in these materials is still not fully understood (they are not conventional superconductors), the prospects for achieving still higher transition temperatures by this route are not clear. In contrast, the Bardeen-Cooper-Schrieffer theory of conventional superconductivity gives a guide for achieving high Tc with no theoretical upper bound--all that is needed is a favourable combination of high-frequency phonons, strong electron-phonon coupling, and a high density of states. These conditions can in principle be fulfilled for metallic hydrogen and covalent compounds dominated by hydrogen, as hydrogen atoms provide the necessary high-frequency phonon modes as well as the strong electron-phonon coupling. Numerous calculations support this idea and have predicted transition temperatures in the range 50-235 kelvin for many hydrides, but only a moderate Tc of 17 kelvin has been observed experimentally. Here we investigate sulfur hydride, where a Tc of 80 kelvin has been predicted. We find that this system transforms to a metal at a pressure of approximately 90 gigapascals. On cooling, we see signatures of superconductivity: a sharp drop of the resistivity to zero and a decrease of the transition temperature with magnetic field, with magnetic susceptibility measurements confirming a Tc of 203 kelvin. Moreover, a pronounced isotope shift of Tc in sulfur deuteride is suggestive of an electron-phonon mechanism of superconductivity that is consistent with the Bardeen-Cooper-Schrieffer scenario. We argue that the phase responsible for high-Tc superconductivity in this system is likely to be H3S, formed from H2S by decomposition under pressure. These findings raise hope for the prospects for achieving room-temperature superconductivity in other hydrogen-based materials.
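
The BCS "guide" the abstract invokes is often summarized by the weak-coupling estimate below (a textbook relation added for context, not quoted from the paper): raising the phonon frequency, the electron-phonon coupling, or the density of states raises Tc, with no theoretical ceiling.

```latex
% Weak-coupling BCS estimate of T_c: \omega_D is the Debye phonon frequency,
% N(0) the density of states at the Fermi level, V the pairing interaction.
k_{B} T_{c} \approx 1.13\,\hbar\omega_{D}\,\exp\!\left(-\frac{1}{N(0)V}\right)
```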

Journal ArticleDOI
09 Apr 2015-Nature
TL;DR: Six smaller Cas9 orthologues are characterized and it is shown that Cas9 from Staphylococcus aureus (SaCas9) can edit the genome with efficiencies similar to those of SpCas9, while being more than 1 kilobase shorter.
Abstract: The RNA-guided endonuclease Cas9 has emerged as a versatile genome-editing platform. However, the size of the commonly used Cas9 from Streptococcus pyogenes (SpCas9) limits its utility for basic research and therapeutic applications that use the highly versatile adeno-associated virus (AAV) delivery vehicle. Here, we characterize six smaller Cas9 orthologues and show that Cas9 from Staphylococcus aureus (SaCas9) can edit the genome with efficiencies similar to those of SpCas9, while being more than 1 kilobase shorter. We packaged SaCas9 and its single guide RNA expression cassette into a single AAV vector and targeted the cholesterol regulatory gene Pcsk9 in the mouse liver. Within one week of injection, we observed >40% gene modification, accompanied by significant reductions in serum Pcsk9 and total cholesterol levels. We further assess the genome-wide targeting specificity of SaCas9 and SpCas9 using BLESS, and demonstrate that SaCas9-mediated in vivo genome editing has the potential to be efficient and specific.

Journal ArticleDOI
TL;DR: The coronavirus disease 2019 (COVID‐19) pandemic has affected hundreds of thousands of people, and data on symptoms and prognosis in children are rare.
Abstract: Aim: The coronavirus disease 2019 (COVID-19) pandemic has affected hundreds of thousands of people. Data on symptoms and prognosis in children are rare. Methods: A systematic literature review was conducted ...

Proceedings Article
04 Mar 2019
TL;DR: This work finds that dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations, and articulate the "lottery ticket hypothesis".
Abstract: Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.
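
The identification procedure can be summarized as iterative magnitude pruning with weight rewinding. The sketch below captures that loop under stated assumptions: train_fn is a hypothetical user-supplied routine that trains the model with the masks applied, and the per-round pruning fraction is illustrative.

```python
import copy
import torch

def find_winning_ticket(model, train_fn, rounds=5, prune_frac=0.2):
    """Iterative magnitude pruning in the spirit of the lottery ticket
    procedure (sketch): train, prune the smallest-magnitude weights, rewind
    the survivors to their original initialization, and repeat."""
    init_state = copy.deepcopy(model.state_dict())   # the "lottery" initialization
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}
    for _ in range(rounds):
        train_fn(model, masks)                       # hypothetical: train with masks applied
        for name, p in model.named_parameters():
            if name not in masks:
                continue
            alive = p[masks[name].bool()].abs()      # still-unpruned weights
            if alive.numel() == 0:
                continue
            threshold = alive.quantile(prune_frac)   # prune lowest 20% per round
            masks[name] *= (p.abs() > threshold).float()
        model.load_state_dict(init_state)            # rewind to the initial weights
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in masks:
                    p *= masks[name]                 # zero out pruned connections
    return model, masks
```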

Posted ContentDOI
22 Dec 2016-bioRxiv
TL;DR: Tests on both synthetic and real reads show Unicycler can assemble larger contigs with fewer misassemblies than other hybrid assemblers, even when long read depth and accuracy are low.
Abstract: The Illumina DNA sequencing platform generates accurate but short reads, which can be used to produce accurate but fragmented genome assemblies. Pacific Biosciences and Oxford Nanopore Technologies DNA sequencing platforms generate long reads that can produce more complete genome assemblies, but the sequencing is more expensive and error prone. There is significant interest in combining data from these complementary sequencing technologies to generate more accurate "hybrid" assemblies. However, few tools exist that truly leverage the benefits of both types of data, namely the accuracy of short reads and the structural resolving power of long reads. Here we present Unicycler, a new tool for assembling bacterial genomes from a combination of short and long reads, which produces assemblies that are accurate, complete and cost-effective. Unicycler builds an initial assembly graph from short reads using the de novo assembler SPAdes and then simplifies the graph using information from short and long reads. Unicycler utilises a novel semi-global aligner, which is used to align long reads to the assembly graph. Tests on both synthetic and real reads show Unicycler can assemble larger contigs with fewer misassemblies than other hybrid assemblers, even when long read depth and accuracy are low. Unicycler is open source (GPLv3) and available at github.com/rrwick/Unicycler.

Journal ArticleDOI
Yashar Akrami, M. Ashdown, J. Aumont, and 180 more authors (59 institutions)
TL;DR: In this paper, a power-law fit to the angular power spectra of dust polarization at 353 GHz for six nested sky regions covering from 24 to 71 % of the sky is presented.
Abstract: The study of polarized dust emission has become entwined with the analysis of the cosmic microwave background (CMB) polarization. We use new Planck maps to characterize Galactic dust emission as a foreground to the CMB polarization. We present Planck EE, BB, and TE power spectra of dust polarization at 353 GHz for six nested sky regions covering from 24 to 71 % of the sky. We present power-law fits to the angular power spectra, yielding evidence for statistically significant variations of the exponents over sky regions and a difference between the values for the EE and BB spectra. The TE correlation and E/B power asymmetry extend to low multipoles that were not included in earlier Planck polarization papers. We also report evidence for a positive TB dust signal. Combining data from Planck and WMAP, we determine the amplitudes and spectral energy distributions (SEDs) of polarized foregrounds, including the correlation between dust and synchrotron polarized emission, for the six sky regions as a function of multipole. This quantifies the challenge of the component separation procedure required for detecting the reionization and recombination peaks of primordial CMB B modes. The SED of polarized dust emission is fit well by a single-temperature modified blackbody emission law from 353 GHz to below 70 GHz. For a dust temperature of 19.6 K, the mean spectral index for dust polarization is $\beta_{\rm d}^{P} = 1.53\pm0.02 $. By fitting multi-frequency cross-spectra, we examine the correlation of the dust polarization maps across frequency. We find no evidence for decorrelation. If the Planck limit for the largest sky region applies to the smaller sky regions observed by sub-orbital experiments, then decorrelation might not be a problem for CMB experiments aiming at a primordial B-mode detection limit on the tensor-to-scalar ratio $r\simeq0.01$ at the recombination peak.
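
The SED statement in the abstract corresponds to the usual single-temperature modified blackbody, with the best-fit parameters quoted there:

```latex
% Single-temperature modified blackbody fit for the polarized dust SED
% (B_\nu is the Planck function; parameter values as quoted in the abstract):
I_{\nu} \propto \nu^{\beta_{\rm d}^{P}} B_{\nu}(T_{\rm d}),
\qquad \beta_{\rm d}^{P} = 1.53 \pm 0.02, \quad T_{\rm d} = 19.6\,\mathrm{K}
```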

Journal ArticleDOI
03 Apr 2020
TL;DR: Random Erasing, introduced in this paper, randomly selects a rectangle region in an image and erases its pixels with random values, which reduces the risk of over-fitting and makes the model robust to occlusion.
Abstract: In this paper, we introduce Random Erasing, a new data augmentation method for training the convolutional neural network (CNN). In training, Random Erasing randomly selects a rectangle region in an image and erases its pixels with random values. In this process, training images with various levels of occlusion are generated, which reduces the risk of over-fitting and makes the model robust to occlusion. Random Erasing is parameter learning free, easy to implement, and can be integrated with most of the CNN-based recognition models. Albeit simple, Random Erasing is complementary to commonly used data augmentation techniques such as random cropping and flipping, and yields consistent improvement over strong baselines in image classification, object detection and person re-identification. Code is available at: https://github.com/zhunzhong07/Random-Erasing.
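
The method is simple enough to state directly in code. The sketch below follows the description in the abstract; the area and aspect-ratio ranges are typical choices and should be treated as assumptions rather than the paper's exact hyperparameters.

```python
import random
import numpy as np

def random_erase(img, p=0.5, area=(0.02, 0.4), aspect=(0.3, 3.3)):
    """Random Erasing sketch: with probability p, overwrite a randomly
    placed rectangle of the image with random pixel values."""
    if random.random() > p:
        return img
    h, w, c = img.shape
    for _ in range(100):                              # retry until the box fits
        target = random.uniform(*area) * h * w        # erased area in pixels
        ratio = random.uniform(*aspect)               # box aspect ratio
        eh = int(round((target * ratio) ** 0.5))
        ew = int(round((target / ratio) ** 0.5))
        if 0 < eh < h and 0 < ew < w:
            y, x = random.randint(0, h - eh), random.randint(0, w - ew)
            img[y:y + eh, x:x + ew] = np.random.randint(0, 256, (eh, ew, c))
            return img
    return img

img = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
augmented = random_erase(img.copy())
```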

Journal ArticleDOI
TL;DR: In this article, the authors describe the implementation of real-space refinement in the phenix.real_space_refine program from the PHENIX suite, which makes use of extra information such as secondary-structure and rotamer-specific restraints.
Abstract: This article describes the implementation of real-space refinement in the phenix.real_space_refine program from the PHENIX suite. The use of a simplified refinement target function enables very fast calculation, which in turn makes it possible to identify optimal data-restraint weights as part of routine refinements with little runtime cost. Refinement of atomic models against low-resolution data benefits from the inclusion of as much additional information as is available. In addition to standard restraints on covalent geometry, phenix.real_space_refine makes use of extra information such as secondary-structure and rotamer-specific restraints, as well as restraints or constraints on internal molecular symmetry. The re-refinement of 385 cryo-EM-derived models available in the Protein Data Bank at resolutions of 6 Å or better shows significant improvement of the models and of the fit of these models to the target maps.
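
The "data-restraint weights" mentioned above refer to the balance between the experimental-map term and the stereochemical restraints in the refinement target. The generic weighted-sum shape is shown below; the paper's simplified target has its own specific form, so read this only as the standard structure such a weight search optimizes.

```latex
% Generic shape of a weighted refinement target (standard structure only;
% the paper defines the exact simplified functional form):
T = T_{\mathrm{data}} + w \, T_{\mathrm{restraints}}
```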

Journal ArticleDOI
19 Nov 2015-Cell
TL;DR: A machine-learning algorithm is devised that integrates blood parameters, dietary habits, anthropometrics, physical activity, and gut microbiota measured in an 800-person cohort and accurately predicts personalized postprandial glycemic responses to real-life meals; a blinded randomized controlled dietary intervention based on this algorithm resulted in significantly lower postprandial responses and consistent alterations to gut microbiota configuration.
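
The TL;DR calls the predictor a machine-learning algorithm over blood parameters, diet, anthropometrics, activity, and microbiome features; gradient-boosted decision trees are a standard model class for this kind of tabular integration. The sketch below is generic and uses synthetic data; the feature layout, model choice, and hyperparameters are assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical feature matrix: each row is one (person, meal) pair combining
# blood parameters, dietary composition, anthropometrics, activity, and
# microbiome-derived features; the target stands in for the measured
# postprandial glycemic response.
rng = np.random.default_rng(0)
X = rng.normal(size=(800, 40))                   # 800 rows, 40 synthetic features
y = 2 * X[:, 0] + X[:, 5] - X[:, 9] + rng.normal(scale=0.5, size=800)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3)
model.fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 2))
```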

Journal ArticleDOI
TL;DR: The most notable addition has been the Builder interface, allowing users to create studies with minimal or no programming, while also allowing the insertion of Python code for maximal flexibility.
Abstract: PsychoPy is an application for the creation of experiments in behavioral science (psychology, neuroscience, linguistics, etc.) with precise spatial control and timing of stimuli. It now provides a choice of interface; users can write scripts in Python if they choose, while those who prefer to construct experiments graphically can use the new Builder interface. Here we describe the features that have been added over the last 10 years of its development. The most notable addition has been the Builder interface, allowing users to create studies with minimal or no programming, while also allowing the insertion of Python code for maximal flexibility. We also present some of the other new features, including further stimulus options, asynchronous time-stamped hardware polling, and better support for open science and reproducibility. Tens of thousands of users now launch PsychoPy every month, and more than 90 people have contributed to the code. We discuss the current state of the project, as well as plans for the future.
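
A minimal script of the kind the abstract describes, a few lines of Python presenting a timed stimulus, looks like this (window size, text, and duration are illustrative):

```python
# Minimal PsychoPy script: draw a text stimulus for ~1 s, then clean up.
from psychopy import visual, core

win = visual.Window(size=(800, 600), color="grey")
stim = visual.TextStim(win, text="Hello, participant!")

stim.draw()      # render to the back buffer
win.flip()       # swap buffers on the next screen refresh
core.wait(1.0)   # keep the stimulus on screen for ~1 second
win.close()
core.quit()
```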

Journal ArticleDOI
TL;DR: Liu et al. discuss crucial conditions needed to achieve a specific energy higher than 350 Wh kg−1, up to 500 Wh kg−1, for rechargeable Li metal batteries using high-nickel-content lithium nickel manganese cobalt oxides as cathode materials.
Abstract: State-of-the-art lithium (Li)-ion batteries are approaching their specific energy limits yet are challenged by the ever-increasing demand of today’s energy storage and power applications, especially for electric vehicles. Li metal is considered an ultimate anode material for future high-energy rechargeable batteries when combined with existing or emerging high-capacity cathode materials. However, much current research focuses on the battery materials level, and there have been very few accounts of cell design principles. Here we discuss crucial conditions needed to achieve a specific energy higher than 350 Wh kg−1, up to 500 Wh kg−1, for rechargeable Li metal batteries using high-nickel-content lithium nickel manganese cobalt oxides as cathode materials. We also provide an analysis of key factors such as cathode loading, electrolyte amount and Li foil thickness that impact the cell-level cycle life. Furthermore, we identify several important strategies to reduce electrolyte-Li reaction, protect Li surfaces and stabilize anode architectures for long-cycling high-specific-energy cells. Jun Liu and Battery500 Consortium colleagues contemplate the way forward towards high-energy and long-cycling practical batteries.

Journal ArticleDOI
TL;DR: In this paper, the authors estimate the burden of infections caused by antibiotic-resistant bacteria of public health concern in countries of the EU and European Economic Area (EEA) in 2015, measured in numbers of cases, attributable deaths, and disability-adjusted life-years (DALYs).
Abstract: Summary Background Infections due to antibiotic-resistant bacteria are threatening modern health care. However, estimating their incidence, complications, and attributable mortality is challenging. We aimed to estimate the burden of infections caused by antibiotic-resistant bacteria of public health concern in countries of the EU and European Economic Area (EEA) in 2015, measured in number of cases, attributable deaths, and disability-adjusted life-years (DALYs). Methods We estimated the incidence of infections with 16 antibiotic resistance–bacterium combinations from European Antimicrobial Resistance Surveillance Network (EARS-Net) 2015 data that was country-corrected for population coverage. We multiplied the number of bloodstream infections (BSIs) by a conversion factor derived from the European Centre for Disease Prevention and Control point prevalence survey of health-care-associated infections in European acute care hospitals in 2011–12 to estimate the number of non-BSIs. We developed disease outcome models for five types of infection on the basis of systematic reviews of the literature. Findings From EARS-Net data collected between Jan 1, 2015, and Dec 31, 2015, we estimated 671 689 (95% uncertainty interval [UI] 583 148–763 966) infections with antibiotic-resistant bacteria, of which 63·5% (426 277 of 671 689) were associated with health care. These infections accounted for an estimated 33 110 (28 480–38 430) attributable deaths and 874 541 (768 837–989 068) DALYs. The burden for the EU and EEA was highest in infants (aged ...). Interpretation Our results present the health burden of five types of infection with antibiotic-resistant bacteria expressed, for the first time, in DALYs. The estimated burden of infections with antibiotic-resistant bacteria in the EU and EEA is substantial compared with that of other infectious diseases, and has increased since 2007. Our burden estimates provide useful information for public health decision-makers prioritising interventions for infectious diseases. Funding European Centre for Disease Prevention and Control.
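
The DALY figures combine mortality and morbidity in the standard way (the conventional decomposition, added here for context rather than quoted from the paper):

```latex
% DALYs combine years of life lost (YLL) and years lived with disability
% (YLD); N = attributable deaths, L = standard life expectancy at age of
% death, I = incident cases, DW = disability weight, D = mean duration:
\mathrm{DALY} = \mathrm{YLL} + \mathrm{YLD}, \qquad
\mathrm{YLL} = N \times L, \qquad \mathrm{YLD} = I \times DW \times D
```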

Journal ArticleDOI
TL;DR: In patients with type 2 diabetes and a recent acute coronary syndrome, the addition of lixisenatide to usual care did not significantly alter the rate of major cardiovascular events or other serious adverse events.
Abstract: Background Cardiovascular morbidity and mortality are higher among patients with type 2 diabetes, particularly those with concomitant cardiovascular diseases, than in most other populations. We assessed the effects of lixisenatide, a glucagon-like peptide 1–receptor agonist, on cardiovascular outcomes in patients with type 2 diabetes who had had a recent acute coronary event. Methods We randomly assigned patients with type 2 diabetes who had had a myocardial infarction or who had been hospitalized for unstable angina within the previous 180 days to receive lixisenatide or placebo in addition to locally determined standards of care. The trial was designed with adequate statistical power to assess whether lixisenatide was noninferior as well as superior to placebo, as defined by an upper boundary of the 95% confidence interval for the hazard ratio of less than 1.3 and 1.0, respectively, for the primary composite end point of cardiovascular death, myocardial infarction, stroke, or hospitalization for unstable ...
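
The trial's dual hypothesis test, as stated in the abstract, is expressed in terms of the upper bound of the 95% confidence interval for the hazard ratio of the primary composite end point:

```latex
% Decision rules as specified in the abstract, on the hazard ratio (HR)
% of the primary composite end point:
\text{noninferiority: } \mathrm{UB}_{95\%}(\mathrm{HR}) < 1.3,
\qquad \text{superiority: } \mathrm{UB}_{95\%}(\mathrm{HR}) < 1.0
```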

Journal ArticleDOI
TL;DR: It is found that the antibiotic consumption rate in low- and middle-income countries (LMICs) has been converging to (and in some countries surpassing) levels typically observed in high-income countries, and projected total global antibiotic consumption through 2030 was up to 200% higher than the 42 billion DDDs estimated in 2015.
Abstract: Tracking antibiotic consumption patterns over time and across countries could inform policies to optimize antibiotic prescribing and minimize antibiotic resistance, such as setting and enforcing per capita consumption targets or aiding investments in alternatives to antibiotics. In this study, we analyzed the trends and drivers of antibiotic consumption from 2000 to 2015 in 76 countries and projected total global antibiotic consumption through 2030. Between 2000 and 2015, antibiotic consumption, expressed in defined daily doses (DDD), increased 65% (21.1–34.8 billion DDDs), and the antibiotic consumption rate increased 39% (11.3–15.7 DDDs per 1,000 inhabitants per day). The increase was driven by low- and middle-income countries (LMICs), where rising consumption was correlated with gross domestic product per capita (GDPPC) growth (P = 0.004). In high-income countries (HICs), although overall consumption increased modestly, DDDs per 1,000 inhabitants per day fell 4%, and there was no correlation with GDPPC. Of particular concern was the rapid increase in the use of last-resort compounds, both in HICs and LMICs, such as glycylcyclines, oxazolidinones, carbapenems, and polymyxins. Projections of global antibiotic consumption in 2030, assuming no policy changes, were up to 200% higher than the 42 billion DDDs estimated in 2015. Although antibiotic consumption rates in most LMICs remain lower than in HICs despite higher bacterial disease burden, consumption in LMICs is rapidly converging to rates similar to HICs. Reducing global consumption is critical for reducing the threat of antibiotic resistance, but reduction efforts must balance access limitations in LMICs and take account of local and global resistance patterns.
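
The headline percentages follow directly from the DDD figures reported in the abstract; a quick arithmetic check:

```python
# Checking the abstract's headline figures from the reported DDD numbers.
total_2000, total_2015 = 21.1e9, 34.8e9        # defined daily doses (DDDs)
rate_2000, rate_2015 = 11.3, 15.7              # DDDs per 1,000 inhabitants per day

print(f"total consumption: +{(total_2015 / total_2000 - 1):.0%}")   # ~+65%
print(f"consumption rate:  +{(rate_2015 / rate_2000 - 1):.0%}")     # ~+39%
```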

Journal ArticleDOI
TL;DR: This work provides a systematic, practical approach to evaluating and comprehending nomogram-derived prognoses, with particular emphasis on clarifying common misconceptions and highlighting limitations.
Abstract: Nomograms are widely used as prognostic devices in oncology and medicine. With the ability to generate an individual probability of a clinical event by integrating diverse prognostic and determinant variables, nomograms meet our desire for biologically and clinically integrated models and fulfill our drive towards personalised medicine. Rapid computation through user-friendly digital interfaces, together with increased accuracy, and more easily understood prognoses compared with conventional staging, allow for seamless incorporation of nomogram-derived prognosis to aid clinical decision making. This has led to the appearance of many nomograms on the internet and in medical journals, and an increase in nomogram use by patients and physicians alike. However, the statistical foundations of nomogram construction, their precise interpretation, and evidence supporting their use are generally misunderstood. This issue is leading to an under-appreciation of the inherent uncertainties regarding nomogram use. We provide a systematic, practical approach to evaluating and comprehending nomogram-derived prognoses, with particular emphasis on clarifying common misconceptions and highlighting limitations.
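
Underlying every nomogram is a fitted regression model whose linear predictor the graphical "points" rescale; for a binary clinical event this is commonly a logistic model. The form below is generic background, not a specific nomogram from the review:

```latex
% Generic logistic form behind many prognostic nomograms; the printed
% "points" scales are rescalings of the terms \beta_i x_i:
p(\text{event}) = \frac{1}{1 + \exp\!\left(-\beta_0 - \sum_{i} \beta_i x_i\right)}
```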

Journal ArticleDOI
TL;DR: In this paper, the authors provide evidence-based recommendations to manage otitis media with effusion (OME), defined as the presence of fluid in the middle ear without signs or symptoms of acute ear infection.
Abstract: Objective This update of a 2004 guideline codeveloped by the American Academy of Otolaryngology—Head and Neck Surgery Foundation, the American Academy of Pediatrics, and the American Academy of Family Physicians provides evidence-based recommendations to manage otitis media with effusion (OME), defined as the presence of fluid in the middle ear without signs or symptoms of acute ear infection. Changes from the prior guideline include consumer advocates added to the update group, evidence from 4 new clinical practice guidelines, 20 new systematic reviews, and 49 randomized controlled trials, enhanced emphasis on patient education and shared decision making, a new algorithm to clarify action statement relationships, and new and expanded recommendations for the diagnosis and management of OME. Purpose The purpose of this multidisciplinary guideline is to identify quality improvement opportunities in managing OME and to create explicit and actionable recommendations to implement these opportunities in clinical pra...

Journal ArticleDOI
TL;DR: This Review summarizes dual-catalyst strategies that have been applied to synthetic photochemistry, and focuses upon the cooperative interactions of photocatalysts with redox mediators, Lewis and Brønsted acids, organocatalysts, enzymes, and transition metal complexes.
Abstract: The interaction between an electronically excited photocatalyst and an organic molecule can result in the generation of a diverse array of reactive intermediates that can be manipulated in a variety of ways to result in synthetically useful bond constructions. This Review summarizes dual-catalyst strategies that have been applied to synthetic photochemistry. Mechanistically distinct modes of photocatalysis are discussed, including photoinduced electron transfer, hydrogen atom transfer, and energy transfer. We focus upon the cooperative interactions of photocatalysts with redox mediators, Lewis and Brønsted acids, organocatalysts, enzymes, and transition metal complexes.

Journal ArticleDOI
TL;DR: A random-effects model to summarize the evidence about treatment efficacy from a number of related clinical trials is reviewed, along with a discussion of repurposing the method for Big Data meta-analysis and Genome-Wide Association Studies to study the importance of genetic variants in complex diseases.
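
The random-effects model referred to here is the standard two-level formulation: each trial's observed effect varies around a trial-specific true effect, which in turn varies around an overall mean with between-trial variance τ², and the summary estimate is an inverse-variance weighted average:

```latex
% Standard random-effects meta-analysis: \hat{\theta}_i is trial i's
% estimated effect with within-trial variance \sigma_i^2; \tau^2 is the
% between-trial variance; \hat{\mu} is the pooled estimate.
\hat{\theta}_i \sim N(\theta_i, \sigma_i^2), \qquad
\theta_i \sim N(\mu, \tau^2), \qquad
\hat{\mu} = \frac{\sum_i w_i \hat{\theta}_i}{\sum_i w_i},
\quad w_i = \frac{1}{\sigma_i^2 + \tau^2}
```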

Journal ArticleDOI
09 Jun 2015-JAMA
TL;DR: How to identify patients with nonalcoholic fatty liver disease at greatest risk of nonalcoholic steatohepatitis and cirrhosis is illustrated, and the role and limitations of current diagnostics and liver biopsy are discussed, to provide an outline for the management of patients across the spectrum of nonalcoholic fatty liver disease.
Abstract: Importance Nonalcoholic fatty liver disease and its subtype nonalcoholic steatohepatitis affect approximately 30% and 5%, respectively, of the US population. In patients with nonalcoholic steatohepatitis, half of deaths are due to cardiovascular disease and malignancy, yet awareness of this remains low. Cirrhosis, the third leading cause of death in patients with nonalcoholic fatty liver disease, is predicted to become the most common indication for liver transplantation. Objectives To illustrate how to identify patients with nonalcoholic fatty liver disease at greatest risk of nonalcoholic steatohepatitis and cirrhosis; to discuss the role and limitations of current diagnostics and liver biopsy to diagnose nonalcoholic steatohepatitis; and to provide an outline for the management of patients across the spectrum of nonalcoholic fatty liver disease. Evidence Review PubMed was queried for published articles through February 28, 2015, using the search terms NAFLD and cirrhosis, mortality, biomarkers, and treatment. A total of 88 references were selected, including 14 randomized clinical trials, 19 cohort or case-control studies, 1 population-based study, 2 practice guidelines, 7 meta-analyses, 43 classified as other, and 2 webpages. Findings Sixty-six percent of patients older than 50 years with diabetes or obesity are thought to have nonalcoholic steatohepatitis with advanced fibrosis. Even though the ability to identify the nonalcoholic steatohepatitis subtype within those with nonalcoholic fatty liver disease still requires liver biopsy, biomarkers to detect advanced fibrosis are increasingly reliable. Lifestyle modification is the foundation of treatment for patients with nonalcoholic steatosis. Available treatments with proven benefit include vitamin E, pioglitazone, and obeticholic acid; however, the effect size is modest (...). Conclusions and Relevance Between 75 million and 100 million individuals in the United States are estimated to have nonalcoholic fatty liver disease and its potential morbidity extends beyond the liver. It is important that primary care physicians, endocrinologists, and other specialists be aware of the scope and long-term effects of the disease. Early identification of patients with nonalcoholic steatohepatitis may help improve patient outcomes through treatment intervention, including transplantation for those with decompensated cirrhosis.