
Journal ArticleDOI
TL;DR: A phylogenetic network of SARS-CoV-2 genomes sampled from across the world faithfully traces routes of infections for documented coronavirus disease 2019 (COVID-19) cases, indicating that phylogenetic networks can likewise be successfully used to help trace undocumented COVID-19 infection sources, which can be quarantined to prevent recurrent spread of the disease worldwide.
Abstract: In a phylogenetic network analysis of 160 complete human severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) genomes, we find three central variants distinguished by amino acid changes, which we have named A, B, and C, with A being the ancestral type according to the bat outgroup coronavirus. The A and C types are found in significant proportions outside East Asia, that is, in Europeans and Americans. In contrast, the B type is the most common type in East Asia, and its ancestral genome appears not to have spread outside East Asia without first mutating into derived B types, pointing to founder effects or immunological or environmental resistance against this type outside Asia. The network faithfully traces routes of infections for documented coronavirus disease 2019 (COVID-19) cases, indicating that phylogenetic networks can likewise be successfully used to help trace undocumented COVID-19 infection sources, which can then be quarantined to prevent recurrent spread of the disease worldwide.

867 citations


Journal ArticleDOI
TL;DR: The results suggest that elevated N and P inputs lead to predictable shifts in the taxonomic and functional traits of soil microbial communities, including increases in the relative abundances of faster-growing, copiotrophic bacterial taxa, with these shifts likely to impact belowground ecosystems worldwide.
Abstract: Soil microorganisms are critical to ecosystem functioning and the maintenance of soil fertility. However, despite global increases in the inputs of nitrogen (N) and phosphorus (P) to ecosystems due to human activities, we lack a predictive understanding of how microbial communities respond to elevated nutrient inputs across environmental gradients. Here we used high-throughput sequencing of marker genes to elucidate the responses of soil fungal, archaeal, and bacterial communities using an N and P addition experiment replicated at 25 globally distributed grassland sites. We also sequenced metagenomes from a subset of the sites to determine how the functional attributes of bacterial communities change in response to elevated nutrients. Despite strong compositional differences across sites, microbial communities shifted in a consistent manner with N or P additions, and the magnitude of these shifts was related to the magnitude of plant community responses to nutrient inputs. Mycorrhizal fungi and methanogenic archaea decreased in relative abundance with nutrient additions, as did the relative abundances of oligotrophic bacterial taxa. The metagenomic data provided additional evidence for this shift in bacterial life history strategies because nutrient additions decreased the average genome sizes of the bacterial community members and elicited changes in the relative abundances of representative functional genes. Our results suggest that elevated N and P inputs lead to predictable shifts in the taxonomic and functional traits of soil microbial communities, including increases in the relative abundances of faster-growing, copiotrophic bacterial taxa, with these shifts likely to impact belowground ecosystems worldwide.

867 citations


Posted Content
TL;DR: This work develops CodeBERT with Transformer-based neural architecture, and trains it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators.
Abstract: We present CodeBERT, a bimodal pre-trained model for programming language (PL) and natural language (NL). CodeBERT learns general-purpose representations that support downstream NL-PL applications such as natural language code search, code documentation generation, etc. We develop CodeBERT with Transformer-based neural architecture, and train it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators. This enables us to utilize both bimodal data of NL-PL pairs and unimodal data, where the former provides input tokens for model training while the latter helps to learn better generators. We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters. Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation tasks. Furthermore, to investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and evaluate in a zero-shot setting where parameters of pre-trained models are fixed. Results show that CodeBERT performs better than previous pre-trained models on NL-PL probing.

867 citations
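As an illustration of how a bimodal NL-PL model like this is typically consumed downstream, here is a minimal sketch that loads a publicly released CodeBERT checkpoint with Hugging Face Transformers and embeds a natural-language/code pair. The example strings and the choice of first-token pooling are assumptions for the sketch, not details taken from the abstract.

```python
# Minimal sketch (assumptions noted above): embed an NL-PL pair with CodeBERT.
from transformers import AutoModel, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

nl = "return the maximum value in a list"            # hypothetical query
code = "def max_value(xs):\n    return max(xs)"      # hypothetical code snippet

# The tokenizer builds a single bimodal sequence with separator tokens between the segments.
inputs = tokenizer(nl, code, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# One common choice: use the first token's hidden state as a joint NL-PL representation,
# e.g. for natural language code search via cosine similarity.
embedding = outputs.last_hidden_state[:, 0, :]
print(embedding.shape)  # torch.Size([1, 768])
```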


Book
04 Aug 2015
TL;DR: Comprehensive yet accessible, this book offers a unique introduction to anyone interested in understanding how to model and forecast the range of choices made by individuals and groups.
Abstract: The second edition of this popular book brings students fully up to date with the latest methods and techniques in choice analysis. Comprehensive yet accessible, it offers a unique introduction to anyone interested in understanding how to model and forecast the range of choices made by individuals and groups. In addition to a complete rewrite of several chapters, new topics covered include ordered choice, scaled MNL, generalized mixed logit, latent class models, group decision making, heuristics and attribute processing strategies, expected utility theory, and prospect theoretic applications. Many additional case studies are used to illustrate the applications of choice analysis, with extensive command syntax provided for all Nlogit applications and datasets available online. With its unique blend of theory, estimation, and application, this book has broad appeal to all those interested in choice modeling methods and will be a valuable resource for students as well as researchers, professionals, and consultants.

867 citations


Journal ArticleDOI
TL;DR: The analysis quantifies the enormity of the clinical and economic burdens of NAFLD, which will likely increase as the incidence of NAFLD continues to rise.

867 citations


Journal ArticleDOI
TL;DR: In this paper, the convergence of the alternating direction method of multipliers (ADMM) for minimizing a nonconvex and possibly nonsmooth objective function is analyzed, subject to coupled linear equality constraints.
Abstract: In this paper, we analyze the convergence of the alternating direction method of multipliers (ADMM) for minimizing a nonconvex and possibly nonsmooth objective function, $\phi(x_0,\ldots,x_p,y)$, subject to coupled linear equality constraints. Our ADMM updates each of the primal variables $x_0,\ldots,x_p,y$, followed by updating the dual variable. We separate the variable $y$ from the $x_i$'s as it has a special role in our analysis. The developed convergence guarantee covers a variety of nonconvex functions such as piecewise linear functions, the $\ell_q$ quasi-norm, the Schatten-$q$ quasi-norm ($0<q<1$), ...

867 citations
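For orientation, the update scheme described in the abstract has the following generic shape; the constraint matrices A_i, B, right-hand side b, penalty ρ and multiplier λ are illustrative notation introduced here, not the paper's exact formulation.

```latex
% Sketch of multi-block ADMM for  min \phi(x_0,\dots,x_p,y)  s.t.  \sum_i A_i x_i + B y = b,
% with augmented Lagrangian penalty \rho > 0 (notation assumed, see lead-in).
\begin{align*}
L_\rho(x,y,\lambda) &= \phi(x_0,\dots,x_p,y)
   + \Bigl\langle \lambda,\ \sum_{i=0}^{p} A_i x_i + B y - b \Bigr\rangle
   + \frac{\rho}{2}\,\Bigl\| \sum_{i=0}^{p} A_i x_i + B y - b \Bigr\|^2, \\
x_i^{k+1} &= \operatorname*{arg\,min}_{x_i}\;
   L_\rho\bigl(x_0^{k+1},\dots,x_{i-1}^{k+1},\, x_i,\, x_{i+1}^{k},\dots,x_p^{k},\, y^{k},\, \lambda^{k}\bigr),
   \qquad i = 0,\dots,p, \\
y^{k+1} &= \operatorname*{arg\,min}_{y}\;
   L_\rho\bigl(x_0^{k+1},\dots,x_p^{k+1},\, y,\, \lambda^{k}\bigr), \\
\lambda^{k+1} &= \lambda^{k} + \rho \Bigl( \sum_{i=0}^{p} A_i x_i^{k+1} + B y^{k+1} - b \Bigr).
\end{align*}
```

Updating y last matches the special role the analysis assigns to that variable.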


Journal ArticleDOI
TL;DR: RBO Hand 2 is presented, a highly compliant, underactuated, robust, and dexterous anthropomorphic hand that is inexpensive to manufacture and the morphology can easily be adapted to specific applications, and it is demonstrated that complex grasping behavior can be achieved with relatively simple control.
Abstract: The usefulness and versatility of a robotic end-effector depends on the diversity of grasps it can accomplish and also on the complexity of the control methods required to achieve them. We believe that soft hands are able to provide diverse and robust grasping with low control complexity. They possess many mechanical degrees of freedom and are able to implement complex deformations. At the same time, due to the inherent compliance of soft materials, only very few of these mechanical degrees have to be controlled explicitly. Soft hands therefore may combine the best of both worlds. In this paper, we present RBO Hand 2, a highly compliant, underactuated, robust, and dexterous anthropomorphic hand. The hand is inexpensive to manufacture and the morphology can easily be adapted to specific applications. To enable efficient hand design, we derive and evaluate computational models for the mechanical properties of the hand's basic building blocks, called PneuFlex actuators. The versatility of RBO Hand 2 is evaluated by implementing the comprehensive Feix taxonomy of human grasps. The manipulator's capabilities and limits are demonstrated using the Kapandji test and grasping experiments with a variety of objects of varying weight. Furthermore, we demonstrate that the effective dimensionality of grasp postures exceeds the dimensionality of the actuation signals, illustrating that complex grasping behavior can be achieved with relatively simple control.

867 citations


Journal ArticleDOI
TL;DR: The results indicate that G. lucidum and its high molecular weight polysaccharides may be used as prebiotic agents to prevent gut dysbiosis and obesity-related metabolic disorders in obese individuals.
Abstract: Obesity is associated with low-grade chronic inflammation and intestinal dysbiosis. Ganoderma lucidum is a medicinal mushroom used in traditional Chinese medicine with putative anti-diabetic effects. Here, we show that a water extract of Ganoderma lucidum mycelium (WEGL) reduces body weight, inflammation and insulin resistance in mice fed a high-fat diet (HFD). Our data indicate that WEGL not only reverses HFD-induced gut dysbiosis-as indicated by the decreased Firmicutes-to-Bacteroidetes ratios and endotoxin-bearing Proteobacteria levels-but also maintains intestinal barrier integrity and reduces metabolic endotoxemia. The anti-obesity and microbiota-modulating effects are transmissible via horizontal faeces transfer from WEGL-treated mice to HFD-fed mice. We further show that high molecular weight polysaccharides (>300 kDa) isolated from the WEGL extract produce similar anti-obesity and microbiota-modulating effects. Our results indicate that G. lucidum and its high molecular weight polysaccharides may be used as prebiotic agents to prevent gut dysbiosis and obesity-related metabolic disorders in obese individuals.

867 citations


Journal ArticleDOI
TL;DR: It can be concluded that insulin resistance in the myocardium generates damage by at least three different mechanisms: (1) signal transduction alteration, (2) impaired regulation of substrate metabolism, and (3) altered delivery of substrates to the myocardium.
Abstract: For many years, cardiovascular disease (CVD) has been the leading cause of death around the world. Often associated with CVD are comorbidities such as obesity, abnormal lipid profiles and insulin resistance. Insulin is a key hormone that functions as a regulator of cellular metabolism in many tissues in the human body. Insulin resistance is defined as a decrease in tissue response to insulin stimulation; thus, insulin resistance is characterized by defects in uptake and oxidation of glucose, a decrease in glycogen synthesis, and, to a lesser extent, the ability to suppress lipid oxidation. Literature widely suggests that free fatty acids are the predominant substrate used in the adult myocardium for ATP production; however, the cardiac metabolic network is highly flexible and can use other substrates, such as glucose, lactate or amino acids. During insulin resistance, several metabolic alterations induce the development of cardiovascular disease. For instance, insulin resistance can induce an imbalance in glucose metabolism that generates chronic hyperglycemia, which in turn triggers oxidative stress and causes an inflammatory response that leads to cell damage. Insulin resistance can also alter systemic lipid metabolism, which then leads to the development of dyslipidemia and the well-known lipid triad: (1) high levels of plasma triglycerides, (2) low levels of high-density lipoprotein, and (3) the appearance of small dense low-density lipoproteins. This triad, along with endothelial dysfunction, which can also be induced by aberrant insulin signaling, contributes to atherosclerotic plaque formation. Regarding the systemic consequences associated with insulin resistance and the metabolic cardiac alterations, it can be concluded that insulin resistance in the myocardium generates damage by at least three different mechanisms: (1) signal transduction alteration, (2) impaired regulation of substrate metabolism, and (3) altered delivery of substrates to the myocardium. The aim of this review is to discuss the mechanisms associated with insulin resistance and the development of CVD. New therapies focused on decreasing insulin resistance may contribute to a decrease in both CVD and atherosclerotic plaque generation.

867 citations


Posted Content
TL;DR: The authors found differences as high as 1.8 between commonly used configurations of the BLEU score, caused mainly by different tokenization and normalization schemes applied to the reference, and suggested that machine translation researchers settle upon the standard WMT scheme, which does not allow for user-supplied reference processing.
Abstract: The field of machine translation faces an under-recognized problem because of inconsistency in the reporting of scores from its dominant metric. Although people refer to "the" BLEU score, BLEU is in fact a parameterized metric whose values can vary wildly with changes to these parameters. These parameters are often not reported or are hard to find, and consequently, BLEU scores between papers cannot be directly compared. I quantify this variation, finding differences as high as 1.8 between commonly used configurations. The main culprit is different tokenization and normalization schemes applied to the reference. Pointing to the success of the parsing community, I suggest machine translation researchers settle upon the BLEU scheme used by the annual Conference on Machine Translation (WMT), which does not allow for user-supplied reference processing, and provide a new tool, SacreBLEU, to facilitate this.

867 citations
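A small usage sketch of the SacreBLEU tool mentioned above, via its Python package; the hypothesis and reference strings are made-up examples.

```python
# Minimal sketch: a reproducible corpus-level BLEU score with sacrebleu
# (pip install sacrebleu). The sentences below are placeholder examples.
import sacrebleu

hypotheses = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]  # one inner list per reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)  # BLEU computed with the package's own reference tokenization
```

The command-line tool additionally prints a signature string recording its version and parameters, so that reported scores can be compared across papers.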


Journal ArticleDOI
TL;DR: In this article, a general methodology based on region perturbation for evaluating ordered collections of pixels such as heatmaps is presented, and the authors compare heatmaps computed by three different methods on the SUN397, ILSVRC2012, and MIT Places data sets.
Abstract: Deep neural networks (DNNs) have demonstrated impressive performance in complex machine learning tasks such as image classification or speech recognition. However, due to their multilayer nonlinear structure, they are not transparent, i.e., it is hard to grasp what makes them arrive at a particular classification or recognition decision, given a new unseen data sample. Recently, several approaches have been proposed enabling one to understand and interpret the reasoning embodied in a DNN for a single test image. These methods quantify the “importance” of individual pixels with respect to the classification decision and allow a visualization in terms of a heatmap in pixel/input space. While the usefulness of heatmaps can be judged subjectively by a human, an objective quality measure is missing. In this paper, we present a general methodology based on region perturbation for evaluating ordered collections of pixels such as heatmaps. We compare heatmaps computed by three different methods on the SUN397, ILSVRC2012, and MIT Places data sets. Our main result is that the recently proposed layer-wise relevance propagation algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method. We provide theoretical arguments to explain this result and discuss its practical implications. Finally, we investigate the use of heatmaps for unsupervised assessment of the neural network performance.
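The region-perturbation protocol lends itself to a compact sketch: rank image regions by their summed heatmap relevance, perturb the most relevant regions first, and record how quickly the classifier's score drops. The block size, the uniform-noise replacement, and the generic predict callable below are assumptions for illustration, not the paper's exact settings.

```python
# Sketch of region-perturbation evaluation of a heatmap. Assumptions: a 2D image,
# square non-overlapping regions, replacement by uniform noise, and a
# `predict(image) -> class score` callable supplied by the caller.
import numpy as np

def perturbation_curve(image, heatmap, predict, region=8, steps=30, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = heatmap.shape
    # Relevance of each (region x region) block = sum of heatmap values inside it.
    blocks = [(heatmap[i:i + region, j:j + region].sum(), i, j)
              for i in range(0, h, region) for j in range(0, w, region)]
    blocks.sort(reverse=True)  # most relevant regions first

    perturbed = image.copy()
    scores = [predict(perturbed)]
    for _, i, j in blocks[:steps]:
        # Replace the block with random values drawn from the image's value range.
        block = perturbed[i:i + region, j:j + region]
        perturbed[i:i + region, j:j + region] = rng.uniform(
            image.min(), image.max(), size=block.shape)
        scores.append(predict(perturbed))
    # A faster drop in score indicates a heatmap that better identifies relevant pixels.
    return np.array(scores)
```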

Posted Content
TL;DR: Co-teaching as discussed by the authors trains two deep neural networks simultaneously, and let them teach each other given every mini-batch: first, each network feeds forward all data and selects some data of possibly clean labels; secondly, two networks communicate with each other what data in this minibatch should be used for training; finally, each networks back propagates the data selected by its peer network and updates itself.
Abstract: Deep learning with noisy labels is practically challenging, as the capacity of deep models is so high that they can totally memorize these noisy labels sooner or later during training. Nonetheless, recent studies on the memorization effects of deep neural networks show that they would first memorize training data of clean labels and then those of noisy labels. Therefore in this paper, we propose a new deep learning paradigm called Co-teaching for combating with noisy labels. Namely, we train two deep neural networks simultaneously, and let them teach each other given every mini-batch: firstly, each network feeds forward all data and selects some data of possibly clean labels; secondly, two networks communicate with each other what data in this mini-batch should be used for training; finally, each network back propagates the data selected by its peer network and updates itself. Empirical results on noisy versions of MNIST, CIFAR-10 and CIFAR-100 demonstrate that Co-teaching is much superior to the state-of-the-art methods in the robustness of trained deep models.
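The per-mini-batch procedure described above translates almost directly into code. Below is a hedged PyTorch sketch of a single Co-teaching step; the networks, optimizers, and the keep_rate schedule (the fraction of small-loss samples treated as clean) are placeholders supplied by the caller.

```python
# Sketch of one Co-teaching mini-batch step: each network ranks samples by its own
# loss, keeps the small-loss subset, and hands that subset to its peer for the update.
import torch
import torch.nn.functional as F

def co_teaching_step(net1, net2, opt1, opt2, x, y, keep_rate):
    n_keep = max(1, int(keep_rate * len(y)))

    # Rank samples by per-example loss (no gradients needed for the ranking pass).
    with torch.no_grad():
        loss1 = F.cross_entropy(net1(x), y, reduction="none")
        loss2 = F.cross_entropy(net2(x), y, reduction="none")
    idx1 = torch.argsort(loss1)[:n_keep]  # samples net1 believes are clean
    idx2 = torch.argsort(loss2)[:n_keep]  # samples net2 believes are clean

    # Exchange: each network is updated on the samples selected by its peer.
    opt1.zero_grad()
    F.cross_entropy(net1(x[idx2]), y[idx2]).backward()
    opt1.step()

    opt2.zero_grad()
    F.cross_entropy(net2(x[idx1]), y[idx1]).backward()
    opt2.step()
```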

Journal ArticleDOI
TL;DR: This work focuses on task incremental classification, where tasks arrive sequentially and are delineated by clear boundaries, studies the influence of model capacity, weight decay and dropout regularization, and the order in which the tasks are presented, and qualitatively compares methods in terms of required memory, computation time and storage.
Abstract: Artificial neural networks thrive in solving the classification problem for a particular rigid task, acquiring knowledge through generalized learning behaviour from a distinct training phase. The resulting network resembles a static entity of knowledge, with endeavours to extend this knowledge without targeting the original task resulting in a catastrophic forgetting. Continual learning shifts this paradigm towards networks that can continually accumulate knowledge over different tasks without the need to retrain from scratch. We focus on task incremental classification, where tasks arrive sequentially and are delineated by clear boundaries. Our main contributions concern 1) a taxonomy and extensive overview of the state-of-the-art, 2) a novel framework to continually determine the stability-plasticity trade-off of the continual learner, 3) a comprehensive experimental comparison of 11 state-of-the-art continual learning methods and 4 baselines. We empirically scrutinize method strengths and weaknesses on three benchmarks, considering Tiny Imagenet and large-scale unbalanced iNaturalist and a sequence of recognition datasets. We study the influence of model capacity, weight decay and dropout regularization, and the order in which the tasks are presented, and qualitatively compare methods in terms of required memory, computation time and storage.

Journal ArticleDOI
TL;DR: In this article, the authors present an updated summary of the penalized pixel-fitting (pPXF) method, which is used to extract the stellar and gas kinematics, as well as the stellar population of galaxies via full spectrum fitting.
Abstract: I start by providing an updated summary of the penalized pixel-fitting (pPXF) method, which is used to extract the stellar and gas kinematics, as well as the stellar population of galaxies, via full spectrum fitting. I then focus on the problem of extracting the kinematics when the velocity dispersion $\sigma$ is smaller than the velocity sampling $\Delta V$, which is generally, by design, close to the instrumental dispersion $\sigma_{\rm inst}$. The standard approach consists of convolving templates with a discretized kernel, while fitting for its parameters. This is obviously very inaccurate when $\sigma<\Delta V/2$, due to undersampling. Oversampling can prevent this, but it has drawbacks. Here I present a more accurate and efficient alternative. It avoids the evaluation of the under-sampled kernel, and instead directly computes its well-sampled analytic Fourier transform, for use with the convolution theorem. A simple analytic transform exists when the kernel is described by the popular Gauss-Hermite parametrization (which includes the Gaussian as special case) for the line-of-sight velocity distribution. I describe how this idea was implemented in a significant upgrade to the publicly available pPXF software. The key advantage of the new approach is that it provides accurate velocities regardless of $\sigma$. This is important e.g. for spectroscopic surveys targeting galaxies with $\sigma\ll\sigma_{\rm inst}$, for galaxy redshift determinations, or for measuring line-of-sight velocities of individual stars. The proposed method could also be used to fix Gaussian convolution algorithms used in today's popular software packages.
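The core trick, convolving a template with the kernel by multiplying in Fourier space with an analytically evaluated, hence never undersampled, kernel transform, can be sketched for the pure Gaussian case as follows. The Gauss-Hermite generalization and all pPXF implementation details are omitted; the function and variable names are mine.

```python
# Sketch: Gaussian broadening via the convolution theorem, using the analytic
# Fourier transform of the kernel instead of a (possibly undersampled) pixel kernel.
import numpy as np

def gaussian_convolve_fft(template, sigma_pix):
    """Broaden a 1D template by a Gaussian of dispersion sigma_pix (in pixels)."""
    n = len(template)
    ft = np.fft.rfft(template)
    omega = 2 * np.pi * np.fft.rfftfreq(n)                # angular frequency per pixel
    kernel_ft = np.exp(-0.5 * (sigma_pix * omega) ** 2)   # analytic FT of a unit-area Gaussian
    return np.fft.irfft(ft * kernel_ft, n)
```

Because the kernel is never sampled in pixel space, the result remains well defined even when sigma_pix is well below one pixel.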

Posted Content
TL;DR: The Reformer, as discussed by the authors, uses locality-sensitive hashing attention and reversible residual layers to improve the efficiency of Transformers, which routinely achieve state-of-the-art results on a number of tasks but can be prohibitively costly to train, especially on long sequences.
Abstract: Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O($L^2$) to O($L\log L$), where $L$ is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of $N$ times, where $N$ is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.
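The hashing step behind LSH attention can be illustrated in a few lines of NumPy: project vectors through a random rotation and assign each to the bucket given by the largest rotated coordinate or its negation, so that vectors pointing in similar directions tend to share a bucket and attention can be restricted to within-bucket neighbours. The bucket count, the single hash round, and the omission of sorting/chunking are illustrative simplifications.

```python
# Sketch of angular LSH bucketing for grouping similar query/key vectors.
import numpy as np

def lsh_buckets(vectors, n_buckets, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    d = vectors.shape[-1]
    # Random rotation; a vector's bucket is the argmax over the rotated
    # coordinates and their negations (n_buckets must be even).
    r = rng.normal(size=(d, n_buckets // 2))
    rotated = vectors @ r
    return np.argmax(np.concatenate([rotated, -rotated], axis=-1), axis=-1)

# Vectors pointing in similar directions usually land in the same bucket:
q = np.random.default_rng(1).normal(size=(16, 64))
print(lsh_buckets(q, n_buckets=8))
```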

Journal ArticleDOI
11 Jul 2016-Nature
TL;DR: In this paper, the authors performed whole-genome sequencing in 2,657 European individuals with and without diabetes, and exome sequencing for 12,940 individuals from five ancestry groups.
Abstract: The genetic architecture of common traits, including the number, frequency, and effect sizes of inherited variants that contribute to individual risk, has been long debated. Genome-wide association studies have identified scores of common variants associated with type 2 diabetes, but in aggregate, these explain only a fraction of the heritability of this disease. Here, to test the hypothesis that lower-frequency variants explain much of the remainder, the GoT2D and T2D-GENES consortia performed whole-genome sequencing in 2,657 European individuals with and without diabetes, and exome sequencing in 12,940 individuals from five ancestry groups. To increase statistical power, we expanded the sample size via genotyping and imputation in a further 111,548 subjects. Variants associated with type 2 diabetes after sequencing were overwhelmingly common and most fell within regions previously identified by genome-wide association studies. Comprehensive enumeration of sequence variation is necessary to identify functional alleles that provide important clues to disease pathophysiology, but large-scale sequencing does not support the idea that lower-frequency variants have a major role in predisposition to type 2 diabetes.

Journal ArticleDOI
TL;DR: Enzalutamide was associated with significantly longer progression-free and overall survival than standard care in men with metastatic, hormone-sensitive prostate cancer receiving testosterone suppression.
Abstract: Background Enzalutamide, an androgen-receptor inhibitor, has been associated with improved overall survival in men with castration-resistant prostate cancer. It is not known whether adding enzalutamide to testosterone suppression, with or without early docetaxel, will improve survival in men with metastatic, hormone-sensitive prostate cancer. Methods In this open-label, randomized, phase 3 trial, we assigned patients to receive testosterone suppression plus either open-label enzalutamide or a standard nonsteroidal antiandrogen therapy (standard-care group). The primary end point was overall survival. Secondary end points included progression-free survival as determined by the prostate-specific antigen (PSA) level, clinical progression-free survival, and adverse events. Results A total of 1125 men underwent randomization; the median follow-up was 34 months. There were 102 deaths in the enzalutamide group and 143 deaths in the standard-care group (hazard ratio, 0.67; 95% confidence interval [CI], 0.52 to 0.86; P = 0.002). Kaplan-Meier estimates of overall survival at 3 years were 80% (based on 94 events) in the enzalutamide group and 72% (based on 130 events) in the standard-care group. Better results with enzalutamide were also seen in PSA progression-free survival (174 and 333 events, respectively; hazard ratio, 0.39; P<0.001). Conclusions Enzalutamide was associated with significantly longer progression-free and overall survival than standard care in men with metastatic, hormone-sensitive prostate cancer receiving testosterone suppression. The enzalutamide group had a higher incidence of seizures and other toxic effects, especially among those treated with early docetaxel. (Funded by Astellas Scientific and Medical Affairs and others; ENZAMET (ANZUP 1304) ANZCTR number, ACTRN12614000110684; ClinicalTrials.gov number, NCT02446405; and EU Clinical Trials Register number, 2014-003190-42.).

Journal ArticleDOI
TL;DR: This paper proposes to invoke an IRS at the cell boundary of multiple cells to assist the downlink transmission to cell-edge users, whilst mitigating the inter-cell interference, which is a crucial issue in multicell communication systems.
Abstract: Intelligent reflecting surfaces (IRSs) constitute a disruptive wireless communication technique capable of creating a controllable propagation environment. In this paper, we propose to invoke an IRS at the cell boundary of multiple cells to assist the downlink transmission to cell-edge users, whilst mitigating the inter-cell interference, which is a crucial issue in multicell communication systems. We aim for maximizing the weighted sum rate (WSR) of all users through jointly optimizing the active precoding matrices at the base stations (BSs) and the phase shifts at the IRS subject to each BS’s power constraint and unit modulus constraint. Both the BSs and the users are equipped with multiple antennas, which enhances the spectral efficiency by exploiting the spatial multiplexing gain. Due to the non-convexity of the problem, we first reformulate it into an equivalent one, which is solved by using the block coordinate descent (BCD) algorithm, where the precoding matrices and phase shifts are alternately optimized. The optimal precoding matrices can be obtained in closed form, when fixing the phase shifts. A pair of efficient algorithms are proposed for solving the phase shift optimization problem, namely the Majorization-Minimization (MM) Algorithm and the Complex Circle Manifold (CCM) Method. Both algorithms are guaranteed to converge to at least locally optimal solutions. We also extend the proposed algorithms to the more general multiple-IRS and network MIMO scenarios. Finally, our simulation results confirm the advantages of introducing IRSs in enhancing the cell-edge user performance.
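In generic form, the optimization described above looks as follows; the symbols (precoding matrix F_b and power budget P_b at BS b, IRS phase-shift vector θ with N unit-modulus entries, user rate R_k with weight ω_k) are illustrative notation rather than the paper's exact definitions.

```latex
% Generic shape of the weighted sum rate (WSR) problem sketched in the abstract
% (notation assumed, see lead-in).
\begin{align*}
\max_{\{\mathbf{F}_b\},\ \boldsymbol{\theta}} \quad
  & \sum_{k} \omega_k\, R_k\!\left(\{\mathbf{F}_b\}, \boldsymbol{\theta}\right) \\
\text{s.t.} \quad
  & \operatorname{tr}\!\left(\mathbf{F}_b \mathbf{F}_b^{\mathsf H}\right) \le P_b ,
    \qquad \text{for each BS } b, \\
  & |\theta_n| = 1 , \qquad n = 1, \dots, N .
\end{align*}
```

The BCD algorithm then alternates between the precoders (obtainable in closed form with the phase shifts fixed, per the abstract) and the phase shifts (via the MM or CCM method).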

Journal ArticleDOI
TL;DR: There is conflicting evidence to recommend gonadotrophin-releasing hormone agonists (GnRHa) and other means of ovarian suppression for fertility preservation and the panel notes that the field of ovarian tissue cryopreservation is advancing quickly and may evolve to become standard therapy in the future.
Abstract: PurposeTo provide current recommendations about fertility preservation for adults and children with cancer.MethodsA systematic review of the literature published from January 2013 to March 2017 was completed using PubMed and the Cochrane Library. An Update Panel reviewed the identified publications.ResultsThere were 61 publications identified and reviewed. None of these publications prompted a significant change in the 2013 recommendations.RecommendationsHealth care providers should initiate the discussion on the possibility of infertility with patients with cancer treated during their reproductive years or with parents/guardians of children as early as possible. Providers should be prepared to discuss fertility preservation options and/or to refer all potential patients to appropriate reproductive specialists. Although patients may be focused initially on their cancer diagnosis, providers should advise patients regarding potential threats to fertility as early as possible in the treatment process so as t...

Proceedings ArticleDOI
17 May 2015
TL;DR: In this paper, the authors provide a systematic exposition of Bitcoin and the many related cryptocurrencies or "altcoins" and identify three key components of Bitcoin's design that can be decoupled, which enables a more insightful analysis of Bitcoin's properties and future stability.
Abstract: Bitcoin has emerged as the most successful cryptographic currency in history. Within two years of its quiet launch in 2009, Bitcoin grew to comprise billions of dollars of economic value despite only cursory analysis of the system's design. Since then a growing literature has identified hidden-but-important properties of the system, discovered attacks, proposed promising alternatives, and singled out difficult future challenges. Meanwhile a large and vibrant open-source community has proposed and deployed numerous modifications and extensions. We provide the first systematic exposition of Bitcoin and the many related cryptocurrencies or 'altcoins.' Drawing from a scattered body of knowledge, we identify three key components of Bitcoin's design that can be decoupled. This enables a more insightful analysis of Bitcoin's properties and future stability. We map the design space for numerous proposed modifications, providing comparative analyses for alternative consensus mechanisms, currency allocation mechanisms, computational puzzles, and key management tools. We survey anonymity issues in Bitcoin and provide an evaluation framework for analyzing a variety of privacy-enhancing proposals. Finally we provide new insights on what we term disintermediation protocols, which absolve the need for trusted intermediaries in an interesting set of applications. We identify three general disintermediation strategies and provide a detailed comparison.

Journal ArticleDOI
TL;DR: ISOLDE is an interactive molecular-dynamics environment for rebuilding models against experimental cryo-EM or crystallographic maps and reinforces the need for great care when validating models built into low-resolution data.
Abstract: This paper introduces ISOLDE, a new software package designed to provide an intuitive environment for high-fidelity interactive remodelling/refinement of macromolecular models into electron-density maps. ISOLDE combines interactive molecular-dynamics flexible fitting with modern molecular-graphics visualization and established structural biology libraries to provide an immersive interface wherein the model constantly acts to maintain physically realistic conformations as the user interacts with it by directly tugging atoms with a mouse or haptic interface or applying/removing restraints. In addition, common validation tasks are accelerated and visualized in real time. Using the recently described 3.8 Å resolution cryo-EM structure of the eukaryotic minichromosome maintenance (MCM) helicase complex as a case study, it is demonstrated how ISOLDE can be used alongside other modern refinement tools to avoid common pitfalls of low-resolution modelling and improve the quality of the final model. A detailed analysis of changes between the initial and final model provides a somewhat sobering insight into the dangers of relying on a small number of validation metrics to judge the quality of a low-resolution model.

Book ChapterDOI
08 Oct 2016
TL;DR: This work proposes a novel Hollywood in Homes approach to collect data, collecting a new dataset, Charades, with hundreds of people recording videos in their own homes, acting out casual everyday activities, and evaluates and provides baseline results for several tasks including action recognition and automatic description generation.
Abstract: Computer vision has a great potential to help our daily lives by searching for lost keys, watering flowers or reminding us to take a pill. To succeed with such tasks, computer vision methods need to be trained from real and diverse examples of our daily dynamic scenes. While most of such scenes are not particularly exciting, they typically do not appear on YouTube, in movies or TV broadcasts. So how do we collect sufficiently many diverse but boring samples representing our lives? We propose a novel Hollywood in Homes approach to collect such data. Instead of shooting videos in the lab, we ensure diversity by distributing and crowdsourcing the whole process of video creation from script writing to video recording and annotation. Following this procedure we collect a new dataset, Charades, with hundreds of people recording videos in their own homes, acting out casual everyday activities. The dataset is composed of 9,848 annotated videos with an average length of 30 s, showing activities of 267 people from three continents. Each video is annotated by multiple free-text descriptions, action labels, action intervals and classes of interacted objects. In total, Charades provides 27,847 video descriptions, 66,500 temporally localized intervals for 157 action classes and 41,104 labels for 46 object classes. Using this rich data, we evaluate and provide baseline results for several tasks including action recognition and automatic description generation. We believe that the realism, diversity, and casual nature of this dataset will present unique challenges and new opportunities for the computer vision community.

Journal ArticleDOI
TL;DR: An in-depth study on the performance of deep learning based radio signal classification for radio communications signals considers a rigorous baseline method using higher order moments and strong boosted gradient tree classification, and compares performance between the two approaches across a range of configurations and channel impairments.
Abstract: We conduct an in-depth study on the performance of deep learning based radio signal classification for radio communications signals. We consider a rigorous baseline method using higher order moments and strong boosted gradient tree classification, and compare performance between the two approaches across a range of configurations and channel impairments. We consider the effects of carrier frequency offset, symbol rate, and multipath fading in simulation, and conduct over-the-air measurement of radio classification performance in the lab using software radios, and we compare performance and training strategies for both. Finally, we conclude with a discussion of remaining problems, and design considerations for using such techniques.

Journal ArticleDOI
TL;DR: The present update extends on the evidence that CVD risk in the whole spectrum of IJD is increased, which underscores the need for CVDrisk management in these patients.
Abstract: Patients with rheumatoid arthritis (RA) and other inflammatory joint disorders (IJD) have increased cardiovascular disease (CVD) risk compared with the general population. In 2009, the European League Against Rheumatism (EULAR) taskforce recommended screening, identification of CVD risk factors and CVD risk management largely based on expert opinion. In view of substantial new evidence, an update was conducted with the aim of producing CVD risk management recommendations for patients with IJD that now incorporates an increasing evidence base. A multidisciplinary steering committee (representing 13 European countries) comprised 26 members including patient representatives, rheumatologists, cardiologists, internists, epidemiologists, a health professional and fellows. Systematic literature searches were performed and evidence was categorised according to standard guidelines. The evidence was discussed and summarised by the experts in the course of a consensus finding and voting process. Three overarching principles were defined. First, there is a higher risk for CVD in patients with RA, and this may also apply to ankylosing spondylitis and psoriatic arthritis. Second, the rheumatologist is responsible for CVD risk management in patients with IJD. Third, the use of non-steroidal anti-inflammatory drugs and corticosteroids should be in accordance with treatment-specific recommendations from EULAR and Assessment of Spondyloarthritis International Society. Ten recommendations were defined, of which one is new and six were changed compared with the 2009 recommendations. Each designated an appropriate evidence support level. The present update extends on the evidence that CVD risk in the whole spectrum of IJD is increased. This underscores the need for CVD risk management in these patients. These recommendations are defined to provide assistance in CVD risk management in IJD, based on expert opinion and scientific evidence.

Journal ArticleDOI
TL;DR: The consensus that humans are causing recent global warming is shared by 90% to 100% of publishing climate scientists according to six independent studies by co-authors of this paper as discussed by the authors.
Abstract: The consensus that humans are causing recent global warming is shared by 90%–100% of publishing climate scientists according to six independent studies by co-authors of this paper. Those results are consistent with the 97% consensus reported by Cook et al (Environ. Res. Lett. 8 024024) based on 11 944 abstracts of research papers, of which 4014 took a position on the cause of recent global warming. A survey of authors of those papers (N = 2412 papers) also supported a 97% consensus. Tol (2016 Environ. Res. Lett. 11 048001) comes to a different conclusion using results from surveys of non-experts such as economic geologists and a self-selected group of those who reject the consensus. We demonstrate that this outcome is not unexpected because the level of consensus correlates with expertise in climate science. At one point, Tol also reduces the apparent consensus by assuming that abstracts that do not explicitly state the cause of global warming ('no position') represent non-endorsement, an approach that if applied elsewhere would reject consensus on well-established theories such as plate tectonics. We examine the available studies and conclude that the finding of 97% consensus in published climate research is robust and consistent with other surveys of climate scientists and peer-reviewed studies.

Journal ArticleDOI
TL;DR: The notion of three-dimensional topological insulators is extended to systems that host no gapless surface states but exhibit topologically protected gapless hinge states, and it is shown that SnTe as well as surface-modified Bi2TeI, BiSe, and BiTe are helical higher-order topological insulators.
Abstract: Three-dimensional topological (crystalline) insulators are materials with an insulating bulk, but conducting surface states which are topologically protected by time-reversal (or spatial) symmetries. Here, we extend the notion of three-dimensional topological insulators to systems that host no gapless surface states, but exhibit topologically protected gapless hinge states. Their topological character is protected by spatio-temporal symmetries, of which we present two cases: (1) Chiral higher-order topological insulators protected by the combination of time-reversal and a four-fold rotation symmetry. Their hinge states are chiral modes and the bulk topology is $\mathbb{Z}_2$-classified. (2) Helical higher-order topological insulators protected by time-reversal and mirror symmetries. Their hinge states come in Kramers pairs and the bulk topology is $\mathbb{Z}$-classified. We provide the topological invariants for both cases. Furthermore we show that SnTe as well as surface-modified Bi$_2$TeI, BiSe, and BiTe are helical higher-order topological insulators and propose a realistic experimental setup to detect the hinge states.

01 Jan 2015
TL;DR: The authors argue that the interplay between machine and human comparative advantage allows computers to substitute for workers in performing routine, codifiable tasks while amplifying the comparative advantage of workers in supplying problem-solving skills, adaptability, and creativity.
Abstract: In this essay, I begin by identifying the reasons that automation has not wiped out a majority of jobs over the decades and centuries. Automation does indeed substitute for labor, as it is typically intended to do. However, automation also complements labor, raises output in ways that leads to higher demand for labor, and interacts with adjustments in labor supply. Journalists and even expert commentators tend to overstate the extent of machine substitution for human labor and ignore the strong complementarities between automation and labor that increase productivity, raise earnings, and augment demand for labor. Changes in technology do alter the types of jobs available and what those jobs pay. In the last few decades, one noticeable change has been a "polarization" of the labor market, in which wage gains went disproportionately to those at the top and at the bottom of the income and skill distribution, not to those in the middle; however, I also argue, this polarization is unlikely to continue very far into the future. The final section of this paper reflects on how recent and future advances in artificial intelligence and robotics should shape our thinking about the likely trajectory of occupational change and employment growth. I argue that the interplay between machine and human comparative advantage allows computers to substitute for workers in performing routine, codifiable tasks while amplifying the comparative advantage of workers in supplying problem-solving skills, adaptability, and creativity.

Journal ArticleDOI
TL;DR: In this article, the effect of meteorological variability on ozone trends was investigated using a multiple linear regression model and the residual of this regression showed increasing ozone trends of 1–3 ppbv a⁻¹ in megacity clusters of eastern China that they attributed to changes in anthropogenic emissions.
Abstract: Observations of surface ozone available from ∼1,000 sites across China for the past 5 years (2013–2017) show severe summertime pollution and regionally variable trends. We resolve the effect of meteorological variability on the ozone trends by using a multiple linear regression model. The residual of this regression shows increasing ozone trends of 1–3 ppbv a⁻¹ in megacity clusters of eastern China that we attribute to changes in anthropogenic emissions. By contrast, ozone decreased in some areas of southern China. Anthropogenic NOx emissions in China are estimated to have decreased by 21% during 2013–2017, whereas volatile organic compounds (VOCs) emissions changed little. Decreasing NOx would increase ozone under the VOC-limited conditions thought to prevail in urban China while decreasing ozone under rural NOx-limited conditions. However, simulations with the Goddard Earth Observing System Chemical Transport Model (GEOS-Chem) indicate that a more important factor for ozone trends in the North China Plain is the ∼40% decrease of fine particulate matter (PM2.5) over the 2013–2017 period, slowing down the aerosol sink of hydroperoxy (HO2) radicals and thus stimulating ozone production.
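The "regress out meteorology, then read the trend off the residual" step can be sketched with synthetic data; the covariates (temperature, humidity), the injected trend, and all numbers below are made up for illustration and are not the paper's actual predictors or results.

```python
# Sketch: remove meteorological variability with ordinary least squares, then fit
# a linear trend to the residual (synthetic placeholder data throughout).
import numpy as np

rng = np.random.default_rng(0)
n_days = 5 * 365
t = np.arange(n_days)
temperature = 20 + 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 2, n_days)
humidity = 60 + rng.normal(0, 10, n_days)
# Synthetic ozone: meteorology-driven terms plus a small injected trend (0.004 ppbv/day).
ozone = 40 + 1.5 * temperature - 0.1 * humidity + 0.004 * t + rng.normal(0, 5, n_days)

# Meteorological regression: ozone ~ intercept + temperature + humidity.
X = np.column_stack([np.ones(n_days), temperature, humidity])
coef, *_ = np.linalg.lstsq(X, ozone, rcond=None)
residual = ozone - X @ coef

# Trend of the meteorology-adjusted residual, converted to ppbv per year.
slope_per_day = np.polyfit(t, residual, 1)[0]
print(slope_per_day * 365)  # recovers roughly the injected 0.004 * 365 ≈ 1.5 ppbv/yr
```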

Journal ArticleDOI
TL;DR: The current evolution of the epidemiologic characteristics of lung cancer and its relative risk factors are reviewed to explore new ways of diagnosis and treatment.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a deep convolutional neural network architecture for environmental sound classification and used audio data augmentation for overcoming the problem of data scarcity and explore the influence of different augmentations on the performance of the proposed CNN architecture.
Abstract: The ability of deep convolutional neural networks (CNN) to learn discriminative spectro-temporal patterns makes them well suited to environmental sound classification. However, the relative scarcity of labeled data has impeded the exploitation of this family of high-capacity models. This study has two primary contributions: first, we propose a deep convolutional neural network architecture for environmental sound classification. Second, we propose the use of audio data augmentation for overcoming the problem of data scarcity and explore the influence of different augmentations on the performance of the proposed CNN architecture. Combined with data augmentation, the proposed model produces state-of-the-art results for environmental sound classification. We show that the improved performance stems from the combination of a deep, high-capacity model and an augmented training set: this combination outperforms both the proposed CNN without augmentation and a "shallow" dictionary learning model with augmentation. Finally, we examine the influence of each augmentation on the model's classification accuracy for each class, and observe that the accuracy for each class is influenced differently by each augmentation, suggesting that the performance of the model could be improved further by applying class-conditional data augmentation.