
Journal ArticleDOI
TL;DR: A brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders are provided.
Abstract: Over the last years deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, that is, Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein.

1,970 citations


Journal ArticleDOI
06 Apr 2016-PeerJ
TL;DR: This paper is a tutorial-style introduction to PyMC3, a new open source Probabilistic Programming framework written in Python that uses Theano to compute gradients via automatic differentiation as well as compile probabilistic programs on-the-fly to C for increased speed.
Abstract: Probabilistic Programming allows for automatic Bayesian inference on user-defined probabilistic models. Recent advances in Markov chain Monte Carlo (MCMC) sampling allow inference on increasingly complex models. This class of MCMC, known as Hamiltonian Monte Carlo, requires gradient information which is often not readily available. PyMC3 is a new open source Probabilistic Programming framework written in Python that uses Theano to compute gradients via automatic differentiation as well as compile probabilistic programs on-the-fly to C for increased speed. Contrary to other Probabilistic Programming languages, PyMC3 allows model specification directly in Python code. The lack of a domain specific language allows for great flexibility and direct interaction with the model. This paper is a tutorial-style introduction to this software package.

1,969 citations
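
As an illustration of the model specification in plain Python that the abstract describes, here is a minimal sketch of a Gaussian model fit with PyMC3 on synthetic data. The distributions, parameter names, and sampler settings are illustrative choices, not taken from the paper; newer PyMC3 releases accept the `sigma` keyword, while older ones used `sd`.

```python
import numpy as np
import pymc3 as pm

# Synthetic observations (for illustration only).
rng = np.random.default_rng(42)
data = rng.normal(loc=1.0, scale=2.0, size=200)

# Model specification directly in Python code, as described in the abstract.
with pm.Model() as model:
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)            # prior on the mean
    sigma = pm.HalfNormal("sigma", sigma=5.0)            # prior on the spread
    obs = pm.Normal("obs", mu=mu, sigma=sigma, observed=data)
    trace = pm.sample(1000, tune=1000, cores=1)          # NUTS, a Hamiltonian Monte Carlo variant

print(pm.summary(trace))
```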


Journal ArticleDOI
TL;DR: These recommendations address the best approaches for antibiotic stewardship programs to influence the optimal use of antibiotics.
Abstract: Evidence-based guidelines for implementation and measurement of antibiotic stewardship interventions in inpatient populations including long-term care were prepared by a multidisciplinary expert panel of the Infectious Diseases Society of America and the Society for Healthcare Epidemiology of America. The panel included clinicians and investigators representing internal medicine, emergency medicine, microbiology, critical care, surgery, epidemiology, pharmacy, and adult and pediatric infectious diseases specialties. These recommendations address the best approaches for antibiotic stewardship programs to influence the optimal use of antibiotics.

1,969 citations


Posted Content
TL;DR: This paper builds on Double Q-learning, by taking the minimum value between a pair of critics to limit overestimation, and draws the connection between target networks and overestimation bias.
Abstract: In value-based reinforcement learning methods such as deep Q-learning, function approximation errors are known to lead to overestimated value estimates and suboptimal policies. We show that this problem persists in an actor-critic setting and propose novel mechanisms to minimize its effects on both the actor and the critic. Our algorithm builds on Double Q-learning, by taking the minimum value between a pair of critics to limit overestimation. We draw the connection between target networks and overestimation bias, and suggest delaying policy updates to reduce per-update error and further improve performance. We evaluate our method on the suite of OpenAI gym tasks, outperforming the state of the art in every environment tested.

1,968 citations
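
The central mechanism in the abstract, taking the minimum over a pair of critics when forming the bootstrap target, can be sketched in a few framework-agnostic lines. The actor and critic callables below are placeholders standing in for target networks, not the authors' implementation, and target policy smoothing and delayed updates are omitted.

```python
import numpy as np

def td_target(rewards, next_states, done, critic1_target, critic2_target,
              actor_target, gamma=0.99):
    """Clipped double-Q bootstrap target: minimum over a pair of target critics."""
    next_actions = actor_target(next_states)          # a' from the target policy
    q1 = critic1_target(next_states, next_actions)
    q2 = critic2_target(next_states, next_actions)
    q_min = np.minimum(q1, q2)                        # limits overestimation
    return rewards + gamma * (1.0 - done) * q_min

# Toy usage with random stand-in "networks".
actor = lambda s: np.tanh(s @ np.ones((4, 2)))
critic = lambda s, a: s.sum(axis=1) + a.sum(axis=1)
s_next = np.random.randn(8, 4)
y = td_target(np.ones(8), s_next, np.zeros(8), critic, critic, actor)
print(y.shape)  # (8,)
```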


DOI
01 Jan 2020

1,967 citations


Journal ArticleDOI
22 Jan 2015-Nature
TL;DR: Several methods were developed to grow uncultured organisms, by cultivation in situ or by using specific growth factors, yielding a new antibiotic, teixobactin, whose properties suggest a path towards developing antibiotics that are likely to avoid development of resistance.
Abstract: Antibiotic resistance is spreading faster than the introduction of new compounds into clinical practice, causing a public health crisis. Most antibiotics were produced by screening soil microorganisms, but this limited resource of cultivable bacteria was overmined by the 1960s. Synthetic approaches to produce antibiotics have been unable to replace this platform. Uncultured bacteria make up approximately 99% of all species in external environments, and are an untapped source of new antibiotics. We developed several methods to grow uncultured organisms by cultivation in situ or by using specific growth factors. Here we report a new antibiotic that we term teixobactin, discovered in a screen of uncultured bacteria. Teixobactin inhibits cell wall synthesis by binding to a highly conserved motif of lipid II (precursor of peptidoglycan) and lipid III (precursor of cell wall teichoic acid). We did not obtain any mutants of Staphylococcus aureus or Mycobacterium tuberculosis resistant to teixobactin. The properties of this compound suggest a path towards developing antibiotics that are likely to avoid development of resistance.

1,964 citations


Journal ArticleDOI
TL;DR: Among patients with chronic kidney disease, regardless of the presence or absence of diabetes, the risk of a composite of a sustained decline in the estimated GFR of at least 50%, end-stage kidney disease or death from renal or cardiovascular causes was significantly lower with dapagliflozin than with placebo.
Abstract: Background Patients with chronic kidney disease have a high risk of adverse kidney and cardiovascular outcomes. The effect of dapagliflozin in patients with chronic kidney disease, with or...

1,963 citations


Proceedings Article
02 Nov 2017
TL;DR: The Vector Quantised-Variational AutoEncoder (VQ-VAE) as discussed by the authors is a generative model that learns a discrete latent representation by using vector quantization.
Abstract: Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of "posterior collapse" (where the latents are ignored when they are paired with a powerful autoregressive decoder) typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high quality images, videos, and speech as well as doing high quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.

1,963 citations
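
The vector-quantisation step the abstract refers to amounts to replacing each continuous encoder output with its nearest codebook entry. A minimal numpy sketch follows; the batch size, codebook size, and dimensionality are arbitrary choices for illustration, and the straight-through gradient trick used in training is only noted in a comment.

```python
import numpy as np

def vector_quantize(z_e, codebook):
    """Map each encoder output vector to its nearest codebook entry.

    z_e: (batch, d) continuous encoder outputs
    codebook: (K, d) learned embedding vectors
    Returns the discrete codes and the quantized vectors z_q.
    """
    # Squared distance between every encoder vector and every codebook entry.
    dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (batch, K)
    codes = dists.argmin(axis=1)                                          # (batch,)
    z_q = codebook[codes]                                                 # (batch, d)
    # During training, gradients pass straight through: z_q = z_e + stop_grad(z_q - z_e).
    return codes, z_q

codebook = np.random.randn(512, 64)   # K=512 entries of dimension 64 (illustrative)
z_e = np.random.randn(16, 64)
codes, z_q = vector_quantize(z_e, codebook)
print(codes[:5], z_q.shape)
```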


Journal ArticleDOI
TL;DR: All of the major steps in RNA-seq data analysis are reviewed, including experimental design, quality control, read alignment, quantification of gene and transcript levels, visualization, differential gene expression, alternative splicing, functional analysis, gene fusion detection and eQTL mapping.
Abstract: RNA-sequencing (RNA-seq) has a wide variety of applications, but no single analysis pipeline can be used in all cases. We review all of the major steps in RNA-seq data analysis, including experimental design, quality control, read alignment, quantification of gene and transcript levels, visualization, differential gene expression, alternative splicing, functional analysis, gene fusion detection and eQTL mapping. We highlight the challenges associated with each step. We discuss the analysis of small RNAs and the integration of RNA-seq with other functional genomics techniques. Finally, we discuss the outlook for novel technologies that are changing the state of the art in transcriptomics.

1,963 citations
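
As a toy illustration of the "differential gene expression" step listed in the abstract, the sketch below computes per-gene log2 fold changes from a raw count matrix after a deliberately simple counts-per-million normalization. Real analyses rely on dedicated statistical tools with proper dispersion modelling; the gene/sample layout and data here are assumed.

```python
import numpy as np

def log2_fold_change(counts, group_a, group_b, pseudocount=1.0):
    """Per-gene log2 fold change between two groups of samples.

    counts: (genes, samples) matrix of raw read counts
    group_a, group_b: lists of column indices for the two conditions
    A pseudocount avoids division by zero for unexpressed genes.
    """
    # Library-size normalization: counts per million (a deliberately simple choice).
    cpm = counts / counts.sum(axis=0, keepdims=True) * 1e6
    mean_a = cpm[:, group_a].mean(axis=1) + pseudocount
    mean_b = cpm[:, group_b].mean(axis=1) + pseudocount
    return np.log2(mean_b / mean_a)

counts = np.random.poisson(lam=50, size=(1000, 6))   # 1000 genes, 6 samples (synthetic)
lfc = log2_fold_change(counts, group_a=[0, 1, 2], group_b=[3, 4, 5])
print(lfc[:5])
```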


Posted Content
TL;DR: This article constructed several large-scale datasets to show that character-level convolutional networks could achieve state-of-the-art or competitive results in text classification.
Abstract: This article offers an empirical exploration on the use of character-level convolutional networks (ConvNets) for text classification. We constructed several large-scale datasets to show that character-level convolutional networks could achieve state-of-the-art or competitive results. Comparisons are offered against traditional models such as bag of words, n-grams and their TFIDF variants, and deep learning models such as word-based ConvNets and recurrent neural networks.

1,963 citations
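
Character-level input of the kind the article explores is typically a fixed alphabet one-hot encoded over a fixed-length window. The sketch below shows that quantization step only; the alphabet and sequence length are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 .,;:!?'\""   # illustrative alphabet
CHAR_INDEX = {c: i for i, c in enumerate(ALPHABET)}

def encode_text(text, max_len=256):
    """One-hot encode a string into a (len(ALPHABET), max_len) matrix.

    The text is lower-cased and truncated or zero-padded to max_len; characters
    outside the alphabet map to an all-zero column. The result is a fixed-size
    input suitable for a 1-D convolutional network.
    """
    mat = np.zeros((len(ALPHABET), max_len), dtype=np.float32)
    for j, ch in enumerate(text.lower()[:max_len]):
        i = CHAR_INDEX.get(ch)
        if i is not None:
            mat[i, j] = 1.0
    return mat

x = encode_text("Character-level ConvNets treat text as a raw signal.")
print(x.shape, x.sum())
```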


Journal ArticleDOI
TL;DR: The present review tends to articulate important information on SOD, CAT and GPX as first line defense antioxidant enzymes.
Abstract: The body encloses a complex antioxidant defence grid that relies on endogenous enzymatic and non-enzymatic antioxidants. These molecules collectively act against free radicals to resist their damag...

Proceedings ArticleDOI
27 Jun 2016
TL;DR: A novel visual tracking algorithm based on the representations from a discriminatively trained Convolutional Neural Network using a large set of videos with tracking ground-truths to obtain a generic target representation.
Abstract: We propose a novel visual tracking algorithm based on the representations from a discriminatively trained Convolutional Neural Network (CNN). Our algorithm pretrains a CNN using a large set of videos with tracking ground-truths to obtain a generic target representation. Our network is composed of shared layers and multiple branches of domain-specific layers, where domains correspond to individual training sequences and each branch is responsible for binary classification to identify the target in each domain. We train each domain in the network iteratively to obtain generic target representations in the shared layers. When tracking a target in a new sequence, we construct a new network by combining the shared layers in the pretrained CNN with a new binary classification layer, which is updated online. Online tracking is performed by evaluating the candidate windows randomly sampled around the previous target state. The proposed algorithm demonstrates outstanding performance in existing tracking benchmarks.
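
A toy sketch of the online step described at the end of the abstract: candidate windows are sampled around the previous target state and the highest-scoring one is kept. The Gaussian sampling parameters and the scoring function below are placeholders, not the paper's CNN branch.

```python
import numpy as np

def track_step(prev_box, score_fn, n_candidates=256, sigma=(10.0, 10.0, 0.05)):
    """Sample candidate boxes around the previous target and keep the best one.

    prev_box: (x, y, w, h) of the previous target state
    score_fn: callable mapping boxes (N, 4) to scores (N,); in the paper this is
              the network's binary classification branch, here a stand-in.
    """
    x, y, w, h = prev_box
    dx, dy, ds = sigma
    n = n_candidates
    scales = np.exp(np.random.randn(n) * ds)           # small random scale changes
    boxes = np.stack([
        x + np.random.randn(n) * dx,                   # translation jitter
        y + np.random.randn(n) * dy,
        w * scales,
        h * scales,
    ], axis=1)
    scores = score_fn(boxes)
    return boxes[np.argmax(scores)]

best = track_step((100, 80, 40, 60), score_fn=lambda b: -np.abs(b[:, 0] - 100))
print(best)
```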

Journal ArticleDOI
TL;DR: In this article, the authors demonstrate why it is inappropriate to describe nickel oxide or hydroxide and cobalt oxide/hydroxide as pseudocapacitive electrode materials, and demonstrate the difference between these two classes of materials.
Abstract: There are an increasing number of studies regarding active electrode materials that undergo faradaic reactions but are used for electrochemical capacitor applications. Unfortunately, some of these materials are described as “pseudocapacitive” materials despite the fact that their electrochemical signature (e.g., cyclic voltammogram and charge/discharge curve) is analogous to that of a “battery” material, as commonly observed for Ni(OH)2 and cobalt oxides in KOH electrolyte. Conversely, true pseudocapacitive electrode materials such as MnO2 display electrochemical behavior typical of that observed for a capacitive carbon electrode. The difference between these two classes of materials will be explained, and we demonstrate why it is inappropriate to describe nickel oxide or hydroxide and cobalt oxide/hydroxide as pseudocapacitive electrode materials.

Journal ArticleDOI
01 Jul 2017
TL;DR: A new architecture for a fully optical neural network is demonstrated that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.
Abstract: Artificial Neural Networks have dramatically improved performance for many machine learning tasks. We demonstrate a new architecture for a fully optical neural network that enables a computational speed enhancement of at least two orders of magnitude and three orders of magnitude in power efficiency over state-of-the-art electronics.

Proceedings ArticleDOI
05 Jan 2016
TL;DR: Design principles of Industrie 4.0 are identified so that academics may be enabled to investigate the topic further, while practitioners may find assistance in identifying appropriate scenarios.
Abstract: The increasing integration of the Internet of Everything into the industrial value chain has built the foundation for the next industrial revolution called Industrie 4.0. Although Industrie 4.0 is currently a top priority for many companies, research centers, and universities, a generally accepted understanding of the term does not exist. As a result, discussing the topic on an academic level is difficult, and so is implementing Industrie 4.0 scenarios. Based on a quantitative text analysis and a qualitative literature review, the paper identifies design principles of Industrie 4.0. Taking these principles into account, academics may be enabled to investigate the topic further, while practitioners may find assistance in identifying appropriate scenarios. A case study illustrates how the identified design principles support practitioners in identifying Industrie 4.0 scenarios.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the quantum mechanical model of $N$ Majorana fermions with random interactions of a few fermions at a time (Sachdev-Ye-Kitaev model) in the large $N$ limit.
Abstract: The authors study in detail the quantum mechanical model of $N$ Majorana fermions with random interactions of a few fermions at a time (Sachdev-Ye-Kitaev model) in the large $N$ limit. At low energies, the system is strongly interacting and an emergent conformal symmetry develops. Performing technical calculations, the authors elucidate a number of properties of the model near the conformal point.
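
For reference, the model summarized above consists of $N$ Majorana fermions $\chi_i$ with random all-to-all four-fermion couplings. The normalization of the coupling variance below follows a common convention for the four-fermion case and should be checked against the paper itself.

```latex
H = \frac{1}{4!}\sum_{i,j,k,l=1}^{N} J_{ijkl}\,\chi_i \chi_j \chi_k \chi_l,
\qquad
\langle J_{ijkl}^{2} \rangle = \frac{3!\, J^{2}}{N^{3}},
```

where the couplings $J_{ijkl}$ are independent Gaussian random variables with zero mean.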

Proceedings ArticleDOI
21 May 2015
TL;DR: A decentralized personal data management system that ensures users own and control their data is described, and a protocol that turns a blockchain into an automated access-control manager that does not require trust in a third party is implemented.
Abstract: The recent increase in reported incidents of surveillance and security breaches compromising users' privacy calls into question the current model, in which third parties collect and control massive amounts of personal data. Bitcoin has demonstrated in the financial space that trusted, auditable computing is possible using a decentralized network of peers accompanied by a public ledger. In this paper, we describe a decentralized personal data management system that ensures users own and control their data. We implement a protocol that turns a blockchain into an automated access-control manager that does not require trust in a third party. Unlike Bitcoin, transactions in our system are not strictly financial -- they are used to carry instructions, such as storing, querying and sharing data. Finally, we discuss possible future extensions to blockchains that could harness them into a well-rounded solution for trusted computing problems in society.
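
A toy sketch of the access-control idea the abstract describes: transactions carry instructions (store, query, share) and are only executed if the ledger's user-defined permissions allow them. The function names, fields, and in-memory "ledger" are invented for illustration and are not the authors' protocol.

```python
# Hypothetical, simplified ledger of access-control policies and data pointers.
policies = {}   # (user, service) -> set of permitted operations
storage = {}    # pointer -> data blob (stand-in for off-chain encrypted storage)

def t_access(user, service, permissions):
    """Access transaction: the data owner grants (or updates) permissions."""
    policies[(user, service)] = set(permissions)

def t_data(user, service, operation, pointer, payload=None):
    """Data transaction: executed only if the ledger's policy permits it."""
    if operation not in policies.get((user, service), set()):
        raise PermissionError(f"{service} may not {operation} data of {user}")
    if operation == "store":
        storage[pointer] = payload
        return pointer
    if operation == "query":
        return storage.get(pointer)

t_access("alice", "weather-app", ["query"])
t_data("alice", "weather-app", "query", pointer="loc-123")        # permitted
# t_data("alice", "weather-app", "store", "loc-123", b"...")      # would raise PermissionError
```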

Posted Content
TL;DR: The propagation formulations behind the residual building blocks suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation.
Abstract: Deep residual networks have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation. A series of ablation experiments support the importance of these identity mappings. This motivates us to propose a new residual unit, which makes training easier and improves generalization. We report improved results using a 1001-layer ResNet on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet. Code is available at: this https URL
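
A minimal PyTorch sketch of a pre-activation residual unit of the kind the abstract motivates: batch normalization and ReLU precede each convolution, the skip connection is a pure identity, and no activation follows the addition. Channel counts and layer sizes are arbitrary; this is a sketch, not the authors' code.

```python
import torch
import torch.nn as nn

class PreActResidualUnit(nn.Module):
    """Identity-mapping residual unit: y = x + F(x), F = (BN-ReLU-conv) twice."""
    def __init__(self, channels):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))
        out = self.conv2(torch.relu(self.bn2(out)))
        return x + out          # identity skip, no after-addition activation

x = torch.randn(2, 64, 32, 32)
print(PreActResidualUnit(64)(x).shape)   # torch.Size([2, 64, 32, 32])
```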

Journal ArticleDOI
TL;DR: This work argues for the adoption of measures to optimize key elements of the scientific process: methods, reporting and dissemination, reproducibility, evaluation and incentives, in the hope that this will facilitate action toward improving the transparency, reproducible and efficiency of scientific research.
Abstract: Improving the reliability and efficiency of scientific research will increase the credibility of the published scientific literature and accelerate discovery. Here we argue for the adoption of measures to optimize key elements of the scientific process: methods, reporting and dissemination, reproducibility, evaluation and incentives. There is some evidence from both simulations and empirical studies supporting the likely effectiveness of these measures, but their broad adoption by researchers, institutions, funders and journals will require iterative evaluation and improvement. We discuss the goals of these measures, and how they can be implemented, in the hope that this will facilitate action toward improving the transparency, reproducibility and efficiency of scientific research.

Proceedings Article
07 Dec 2015
TL;DR: A new neural architecture is introduced to learn the conditional probability of an output sequence whose elements are discrete tokens corresponding to positions in an input sequence, using a recently proposed mechanism of neural attention. The resulting Ptr-Nets not only improve over sequence-to-sequence with input attention, but also generalize to variable size output dictionaries.
Abstract: We introduce a new neural architecture to learn the conditional probability of an output sequence with elements that are discrete tokens corresponding to positions in an input sequence. Such problems cannot be trivially addressed by existent approaches such as sequence-to-sequence [1] and Neural Turing Machines [2], because the number of target classes in each step of the output depends on the length of the input, which is variable. Problems such as sorting variable sized sequences, and various combinatorial optimization problems belong to this class. Our model solves the problem of variable size output dictionaries using a recently proposed mechanism of neural attention. It differs from the previous attention attempts in that, instead of using attention to blend hidden units of an encoder to a context vector at each decoder step, it uses attention as a pointer to select a member of the input sequence as the output. We call this architecture a Pointer Net (Ptr-Net). We show Ptr-Nets can be used to learn approximate solutions to three challenging geometric problems - finding planar convex hulls, computing Delaunay triangulations, and the planar Travelling Salesman Problem - using training examples alone. Ptr-Nets not only improve over sequence-to-sequence with input attention, but also allow us to generalize to variable size output dictionaries. We show that the learnt models generalize beyond the maximum lengths they were trained on. We hope our results on these tasks will encourage a broader exploration of neural learning for discrete problems.
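
The "attention as a pointer" idea can be sketched directly: attention scores over the encoder states are softmaxed into a distribution over input positions, and the argmax is the selected input element. The additive scoring form and dimensions below are illustrative assumptions.

```python
import numpy as np

def pointer_attention(enc_states, dec_state, W1, W2, v):
    """One decoding step of a pointer network (additive attention).

    enc_states: (n, d) encoder hidden states, one per input position
    dec_state:  (d,)   current decoder hidden state
    W1, W2: (d, d) projection matrices, v: (d,) scoring vector
    Returns a probability distribution over the n input positions.
    """
    scores = np.tanh(enc_states @ W1 + dec_state @ W2) @ v   # (n,)
    scores -= scores.max()                                   # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs

d, n = 16, 5
rng = np.random.default_rng(0)
p = pointer_attention(rng.normal(size=(n, d)), rng.normal(size=d),
                      rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d))
print(p, p.argmax())   # distribution over input positions; argmax is the "pointed-to" element
```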

Proceedings ArticleDOI
Yin Zhou1, Oncel Tuzel1
18 Jun 2018
TL;DR: Zhou et al. propose VoxelNet, a generic 3D detection network that unifies feature extraction and bounding box prediction into a single-stage, end-to-end trainable deep network.
Abstract: Accurate detection of objects in 3D point clouds is a central problem in many applications, such as autonomous navigation, housekeeping robots, and augmented/virtual reality. To interface a highly sparse LiDAR point cloud with a region proposal network (RPN), most existing efforts have focused on hand-crafted feature representations, for example, a bird's eye view projection. In this work, we remove the need of manual feature engineering for 3D point clouds and propose VoxelNet, a generic 3D detection network that unifies feature extraction and bounding box prediction into a single stage, end-to-end trainable deep network. Specifically, VoxelNet divides a point cloud into equally spaced 3D voxels and transforms a group of points within each voxel into a unified feature representation through the newly introduced voxel feature encoding (VFE) layer. In this way, the point cloud is encoded as a descriptive volumetric representation, which is then connected to a RPN to generate detections. Experiments on the KITTI car detection benchmark show that VoxelNet outperforms the state-of-the-art LiDAR based 3D detection methods by a large margin. Furthermore, our network learns an effective discriminative representation of objects with various geometries, leading to encouraging results in 3D detection of pedestrians and cyclists, based on only LiDAR.
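
VoxelNet's first step, partitioning the point cloud into equally spaced voxels, is easy to sketch: each point is binned by its integer voxel index and grouped. The spatial origin and voxel size below are arbitrary assumptions, and each group would subsequently be passed through the voxel feature encoding layer described in the abstract.

```python
import numpy as np
from collections import defaultdict

def voxelize(points, voxel_size=(0.2, 0.2, 0.4), origin=(0.0, -40.0, -3.0)):
    """Group 3D points into equally spaced voxels.

    points: (N, 3) array of x, y, z coordinates
    Returns a dict mapping integer voxel indices (ix, iy, iz) to point arrays.
    """
    idx = np.floor((points - np.asarray(origin)) / np.asarray(voxel_size)).astype(int)
    voxels = defaultdict(list)
    for point, key in zip(points, map(tuple, idx)):
        voxels[key].append(point)
    return {k: np.stack(v) for k, v in voxels.items()}

pts = np.random.uniform(low=(0, -40, -3), high=(70, 40, 1), size=(1000, 3))
voxels = voxelize(pts)
print(len(voxels), "non-empty voxels")
```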

Posted Content
TL;DR: With simple modifications to MoCo, this note establishes stronger baselines that outperform SimCLR and do not require large training batches, and hopes this will make state-of-the-art unsupervised learning research more accessible.
Abstract: Contrastive unsupervised learning has recently shown encouraging progress, e.g., in Momentum Contrast (MoCo) and SimCLR. In this note, we verify the effectiveness of two of SimCLR's design improvements by implementing them in the MoCo framework. With simple modifications to MoCo (namely, using an MLP projection head and more data augmentation), we establish stronger baselines that outperform SimCLR and do not require large training batches. We hope this will make state-of-the-art unsupervised learning research more accessible. Code will be made public.
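
One of the two modifications the note describes, replacing the linear projection with a 2-layer MLP head on top of the encoder features, is a small change. A PyTorch sketch follows; the dimensions are commonly used values, not necessarily the note's exact settings.

```python
import torch
import torch.nn as nn

# The "MLP projection head": a hidden layer with ReLU instead of a single
# linear projection on top of the encoder's features.
def projection_head(in_dim=2048, hidden_dim=2048, out_dim=128):
    return nn.Sequential(
        nn.Linear(in_dim, hidden_dim),
        nn.ReLU(inplace=True),
        nn.Linear(hidden_dim, out_dim),
    )

features = torch.randn(32, 2048)           # stand-in encoder outputs
z = projection_head()(features)
z = nn.functional.normalize(z, dim=1)      # embeddings used in the contrastive loss
print(z.shape)                             # torch.Size([32, 128])
```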

Proceedings ArticleDOI
18 Jun 2018
TL;DR: This work directly operates on raw point clouds by popping up RGBD scans and leverages both mature 2D object detectors and advanced 3D deep learning for object localization, achieving efficiency as well as high recall for even small objects.
Abstract: In this work, we study 3D object detection from RGB-D data in both indoor and outdoor scenes. While previous methods focus on images or 3D voxels, often obscuring natural 3D patterns and invariances of 3D data, we directly operate on raw point clouds by popping up RGB-D scans. However, a key challenge of this approach is how to efficiently localize objects in point clouds of large-scale scenes (region proposal). Instead of solely relying on 3D proposals, our method leverages both mature 2D object detectors and advanced 3D deep learning for object localization, achieving efficiency as well as high recall for even small objects. Benefiting from learning directly in raw point clouds, our method is also able to precisely estimate 3D bounding boxes even under strong occlusion or with very sparse points. Evaluated on KITTI and SUN RGB-D 3D detection benchmarks, our method outperforms the state of the art by remarkable margins while having real-time capability.
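
"Popping up" an RGB-D scan and using a 2D detector for region proposal amounts to selecting the 3D points whose image projection falls inside a 2D detection box. A numpy sketch with a pinhole camera model follows; the intrinsics, box, and point cloud are made up for illustration.

```python
import numpy as np

def frustum_points(points, box2d, fx, fy, cx, cy):
    """Select the 3D points that project inside a 2D detection box.

    points: (N, 3) points in the camera frame (x right, y down, z forward)
    box2d: (umin, vmin, umax, vmax) pixel box from a 2D detector
    fx, fy, cx, cy: pinhole camera intrinsics
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    valid = z > 0                          # keep points in front of the camera
    u = fx * x / z + cx                    # perspective projection
    v = fy * y / z + cy
    umin, vmin, umax, vmax = box2d
    inside = valid & (u >= umin) & (u <= umax) & (v >= vmin) & (v <= vmax)
    return points[inside]                  # the frustum point cloud for this detection

pts = np.random.uniform(low=(-10, -2, 0.5), high=(10, 2, 40), size=(5000, 3))
frustum = frustum_points(pts, box2d=(300, 150, 500, 350), fx=700, fy=700, cx=620, cy=180)
print(frustum.shape)
```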

Journal ArticleDOI
24 Sep 2015-Cell
TL;DR: It is demonstrated that the disease-related RBP hnRNPA1 undergoes liquid-liquid phase separation (LLPS) into protein-rich droplets mediated by a low complexity sequence domain (LCD), and suggested that LCD-mediated LLPS contributes to the assembly of stress granules and their liquid properties.

Journal ArticleDOI
28 Apr 2017-Science
TL;DR: A Cas13a-based molecular detection platform, termed Specific High-Sensitivity Enzymatic Reporter UnLOCKing (SHERLOCK), is used to detect specific strains of Zika and Dengue virus, distinguish pathogenic bacteria, genotype human DNA, and identify mutations in cell-free tumor DNA.
Abstract: Rapid, inexpensive, and sensitive nucleic acid detection may aid point-of-care pathogen detection, genotyping, and disease monitoring. The RNA-guided, RNA-targeting clustered regularly interspaced short palindromic repeats (CRISPR) effector Cas13a (previously known as C2c2) exhibits a “collateral effect” of promiscuous ribonuclease activity upon target recognition. We combine the collateral effect of Cas13a with isothermal amplification to establish a CRISPR-based diagnostic (CRISPR-Dx), providing rapid DNA or RNA detection with attomolar sensitivity and single-base mismatch specificity. We use this Cas13a-based molecular detection platform, termed Specific High-Sensitivity Enzymatic Reporter UnLOCKing (SHERLOCK), to detect specific strains of Zika and Dengue virus, distinguish pathogenic bacteria, genotype human DNA, and identify mutations in cell-free tumor DNA. Furthermore, SHERLOCK reaction reagents can be lyophilized for cold-chain independence and long-term storage and be readily reconstituted on paper for field applications.

Journal ArticleDOI
TL;DR: Investigators in Wuhan, China, describe the spectrum of Covid-19 illness in children under the age of 16 years (SARS-CoV-2 Infection in Children).
Abstract: SARS-CoV-2 Infection in Children In this report, investigators in Wuhan, China, describe the spectrum of Covid-19 illness in children under the age of 16 years. Of 1391 children assessed and tested...

Journal ArticleDOI
TL;DR: The rationale for the angiotensin-converting enzyme 2 (ACE2) receptor as a specific target is reviewed; a number of pharmaceuticals are already being tested, but a better understanding of the underlying pathobiology is required.
Abstract: A novel infectious disease, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was detected in Wuhan, China, in December 2019. The disease (COVID-19) spread rapidly, reaching epidemic proportions in China, and has been found in 27 other countries. As of February 27, 2020, over 82,000 cases of COVID-19 were reported, with > 2800 deaths. No specific therapeutics are available, and current management includes travel restrictions, patient isolation, and supportive medical care. There are a number of pharmaceuticals already being tested [1, 2], but a better understanding of the underlying pathobiology is required. In this context, this article will briefly review the rationale for angiotensin-converting enzyme 2 (ACE2) receptor as a specific target.

Journal ArticleDOI
TL;DR: In this article, a weak distance prior is used to estimate the distances to all 1.33 billion stars with parallaxes published in the second Gaia data release, and the uncertainty in the distance estimate is characterized by the lower and upper bounds of an asymmetric confidence interval.
Abstract: For the vast majority of stars in the second Gaia data release, reliable distances cannot be obtained by inverting the parallax. A correct inference procedure must instead be used to account for the nonlinearity of the transformation and the asymmetry of the resulting probability distribution. Here, we infer distances to essentially all 1.33 billion stars with parallaxes published in the second Gaia data release. This is done using a weak distance prior that varies smoothly as a function of Galactic longitude and latitude according to a Galaxy model. The irreducible uncertainty in the distance estimate is characterized by the lower and upper bounds of an asymmetric confidence interval. Although more precise distances can be estimated for a subset of the stars using additional data (such as photometry), our goal is to provide purely geometric distance estimates, independent of assumptions about the physical properties of, or interstellar extinction toward, individual stars. We analyze the characteristics of the catalog and validate it using clusters. The catalog can be queried using ADQL at http://gaia.ari.uni-heidelberg.de/tap.html (which also hosts the Gaia catalog) and downloaded from http://www.mpia.de/~calj/gdr2_distances.html.
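
The inference the abstract describes combines a Gaussian parallax likelihood with a distance prior. The sketch below computes the resulting one-dimensional posterior, using an exponentially decreasing space density prior with a fixed scale length as a simple stand-in for the catalog's direction-dependent Galaxy-model prior; the numbers are illustrative.

```python
import numpy as np

def distance_posterior(parallax_mas, sigma_mas, L_kpc=1.35, r_max_kpc=20.0, n=5000):
    """Normalized posterior over distance r given a measured parallax.

    Likelihood: parallax ~ Normal(1/r, sigma); prior: r^2 * exp(-r/L),
    an exponentially decreasing space density prior with scale length L
    (a stand-in for the catalog's direction-dependent prior).
    Parallaxes are in milliarcseconds, distances in kpc, so 1/r[kpc] = parallax[mas].
    """
    r = np.linspace(1e-3, r_max_kpc, n)
    likelihood = np.exp(-0.5 * ((parallax_mas - 1.0 / r) / sigma_mas) ** 2)
    prior = r ** 2 * np.exp(-r / L_kpc)
    post = likelihood * prior
    return r, post / np.trapz(post, r)

r, post = distance_posterior(parallax_mas=0.5, sigma_mas=0.2)
print(r[np.argmax(post)], "kpc (posterior mode)")
```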

Journal ArticleDOI
TL;DR: The NESARC-III data indicate an urgent need to educate the public and policy makers about AUD and its treatment alternatives, to destigmatize the disorder, and to encourage those who cannot reduce their alcohol consumption on their own, despite substantial harm to themselves and others, to seek treatment.
Abstract: Importance National epidemiologic information from recently collected data on the new DSM-5 classification of alcohol use disorder (AUD) using a reliable, valid, and uniform data source is needed. Objective To present nationally representative findings on the prevalence, correlates, psychiatric comorbidity, associated disability, and treatment of DSM-5 AUD diagnoses overall and according to severity level (mild, moderate, or severe). Design, Setting, and Participants We conducted face-to-face interviews with a representative US noninstitutionalized civilian adult (≥18 years) sample (N = 36 309) as the 2012-2013 National Epidemiologic Survey on Alcohol and Related Conditions III (NESARC-III). Data were collected from April 2012 through June 2013 and analyzed in October 2014. Main Outcomes and Measures Twelve-month and lifetime prevalences of AUD. Results Twelve-month and lifetime prevalences of AUD were 13.9% and 29.1%, respectively. Prevalence was generally highest for men (17.6% and 36.0%, respectively), white (14.0% and 32.6%, respectively) and Native American (19.2% and 43.4%, respectively) respondents, and younger (26.7% and 37.0%, respectively) and previously married (11.4% and 27.1%, respectively) or never married (25.0% and 35.5%, respectively) adults. Prevalence of 12-month and lifetime severe AUD was greatest among respondents with the lowest income level (1.8% and 1.5%, respectively). Significant disability was associated with 12-month and lifetime AUD and increased with the severity of AUD. Only 19.8% of respondents with lifetime AUD were ever treated. Significant associations were found between 12-month and lifetime AUD and other substance use disorders, major depressive and bipolar I disorders, and antisocial and borderline personality disorders across all levels of AUD severity, with odds ratios ranging from 1.2 (95% CI, 1.08-1.36) to 6.4 (95% CI, 5.76-7.22). Associations between AUD and panic disorder, specific phobia, and generalized anxiety disorder were modest (odds ratios ranged from 1.2 (95% CI, 1.01-1.43) to 1.4 (95% CI, 1.13-1.67)) across most levels of AUD severity. Conclusions and Relevance Alcohol use disorder defined by DSM-5 criteria is a highly prevalent, highly comorbid, disabling disorder that often goes untreated in the United States. The NESARC-III data indicate an urgent need to educate the public and policy makers about AUD and its treatment alternatives, to destigmatize the disorder, and to encourage those who cannot reduce their alcohol consumption on their own, despite substantial harm to themselves and others, to seek treatment.

Journal ArticleDOI
TL;DR: The current outbreak of the novel coronavirus Covid-19 (coronavirus disease 2019; previously 2019-nCoV), epicentered in Hubei Province of the People's Republic of China, has spread to many other countries, while the incidence in other Asian countries, in Europe, and in North America remains low so far.
Abstract: The current outbreak of the novel coronavirus Covid-19 (coronavirus disease 2019; previously 2019-nCoV), epicentered in Hubei Province of the People's Republic of China, has spread to many other countries. On January 30, 2020, the WHO Emergency Committee declared a global health emergency based on growing case notification rates at Chinese and international locations. The case detection rate is changing hourly and daily and can be tracked in almost real time on a website provided by Johns Hopkins University [1] and other websites. As of early February 2020, China bears the large burden of morbidity and mortality, whereas the incidence in other Asian countries, in Europe, and in North America remains low so far.