
Journal ArticleDOI
29 Jan 2016-eLife
TL;DR: This work employs a combination of phosphoproteomics, genetics, and pharmacology to unambiguously identify a subset of Rab GTPases as key LRRK2 substrates, revealing a novel regulatory mechanism of Rabs that connects them to PD.
Abstract: Parkinson’s disease is a degenerative disorder of the nervous system that affects approximately 1% of the elderly population. Mutations in the gene that encodes an enzyme known as LRRK2 are the most common causes of the inherited form of the disease. Such mutations generally increase the activity of LRRK2 and so drug companies have developed drugs that inhibit LRRK2 to prevent or delay the progression of Parkinson’s disease. However, it was not known what role LRRK2 plays in cells, and why its over-activation is harmful. Steger et al. used a 'proteomics' approach to find other proteins that are regulated by LRRK2. The experiments tested a set of newly developed LRRK2 inhibitors in cells and brain tissue from mice. The mice had mutations in the gene encoding LRRK2 that are often found in human patients with Parkinson’s disease. The experiments show that LRRK2 targets some proteins belonging to the Rab GTPase family, which are involved in transporting molecules and other 'cargoes' around cells. Several Rab GTPases are less active in the mutant mice, which interferes with the ability of these proteins to correctly direct the movement of cargo around the cell. Steger et al.’s findings will help to advance the development of new therapies for Parkinson’s disease. The next challenges are to identify how altering the activity of Rab GTPases leads to degeneration of the nervous system and how LRRK2 inhibitors may slow down these processes.

745 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: In this paper, the authors propose a switching convolutional neural network that leverages variation of crowd density within an image to improve the accuracy and localization of the predicted crowd count, and provide interpretable representations of the multichotomy of space of crowd scene patches inferred from the switch.
Abstract: We propose a novel crowd counting model that maps a given crowd scene to its density. Crowd analysis is compounded by a myriad of factors, such as inter-occlusion between people due to extreme crowding, high similarity of appearance between people and background elements, and large variability of camera viewpoints. Current state-of-the-art approaches tackle these factors by using multi-scale CNN architectures, recurrent networks and late fusion of features from multi-column CNNs with different receptive fields. We propose a switching convolutional neural network that leverages variation of crowd density within an image to improve the accuracy and localization of the predicted crowd count. Patches from a grid within a crowd scene are relayed to independent CNN regressors based on the crowd count prediction quality of the CNN established during training. The independent CNN regressors are designed to have different receptive fields, and a switch classifier is trained to relay the crowd scene patch to the best CNN regressor. We perform extensive experiments on all major crowd counting datasets and demonstrate better performance compared to current state-of-the-art methods. We provide interpretable representations of the multichotomy of the space of crowd scene patches inferred from the switch. It is observed that the switch relays an image patch to a particular CNN column based on the density of the crowd.

745 citations


Journal ArticleDOI
21 Aug 2018-JAMA
TL;DR: The USPSTF concludes with high certainty that the benefits of screening every 3 years with cytology alone in women aged 21 to 29 years substantially outweigh the harms and screening women younger than 21 years does not provide significant benefit.
Abstract: Importance The number of deaths from cervical cancer in the United States has decreased substantially since the implementation of widespread cervical cancer screening and has declined from 2.8 to 2.3 deaths per 100 000 women from 2000 to 2015. Objective To update the US Preventive Services Task Force (USPSTF) 2012 recommendation on screening for cervical cancer. Evidence Review The USPSTF reviewed the evidence on screening for cervical cancer, with a focus on clinical trials and cohort studies that evaluated screening with high-risk human papillomavirus (hrHPV) testing alone or hrHPV and cytology together (cotesting) compared with cervical cytology alone. The USPSTF also commissioned a decision analysis model to evaluate the age at which to begin and end screening, the optimal interval for screening, the effectiveness of different screening strategies, and related benefits and harms of different screening strategies. Findings Screening with cervical cytology alone, primary hrHPV testing alone, or cotesting can detect high-grade precancerous cervical lesions and cervical cancer. Screening women aged 21 to 65 years substantially reduces cervical cancer incidence and mortality. The harms of screening for cervical cancer in women aged 30 to 65 years are moderate. The USPSTF concludes with high certainty that the benefits of screening every 3 years with cytology alone in women aged 21 to 29 years substantially outweigh the harms. The USPSTF concludes with high certainty that the benefits of screening every 3 years with cytology alone, every 5 years with hrHPV testing alone, or every 5 years with both tests (cotesting) in women aged 30 to 65 years outweigh the harms. Screening women older than 65 years who have had adequate prior screening and women younger than 21 years does not provide significant benefit. Screening women who have had a hysterectomy with removal of the cervix for indications other than a high-grade precancerous lesion or cervical cancer provides no benefit. The USPSTF concludes with moderate to high certainty that screening women older than 65 years who have had adequate prior screening and are not otherwise at high risk for cervical cancer, screening women younger than 21 years, and screening women who have had a hysterectomy with removal of the cervix for indications other than a high-grade precancerous lesion or cervical cancer does not result in a positive net benefit. Conclusions and Recommendation The USPSTF recommends screening for cervical cancer every 3 years with cervical cytology alone in women aged 21 to 29 years. (A recommendation) The USPSTF recommends screening every 3 years with cervical cytology alone, every 5 years with hrHPV testing alone, or every 5 years with hrHPV testing in combination with cytology (cotesting) in women aged 30 to 65 years. (A recommendation) The USPSTF recommends against screening for cervical cancer in women younger than 21 years. (D recommendation) The USPSTF recommends against screening for cervical cancer in women older than 65 years who have had adequate prior screening and are not otherwise at high risk for cervical cancer. (D recommendation) The USPSTF recommends against screening for cervical cancer in women who have had a hysterectomy with removal of the cervix and do not have a history of a high-grade precancerous lesion or cervical cancer. (D recommendation)

745 citations


Journal ArticleDOI
TL;DR: Different QD-based imaging applications will be discussed from the technological and the biological point of view, ranging from super-resolution microscopy and single-particle tracking over in vitro cell and tissue imaging to in vivo investigations.
Abstract: Semiconductor quantum dots (QDs) have become important fluorescent probes for in vitro and in vivo bioimaging research. Their nanoparticle surfaces for versatile bioconjugation, their adaptable photophysical properties for multiplexed detection, and their superior stability for longer investigation times are the main advantages of QDs compared to other fluorescence imaging agents. Here, we review the recent literature dealing with the design and application of QD-bioconjugates for advanced in vitro and in vivo imaging. After a short summary of QD preparation and their most important properties, different QD-based imaging applications will be discussed from the technological and the biological point of view, ranging from super-resolution microscopy and single-particle tracking over in vitro cell and tissue imaging to in vivo investigations. A substantial part of the review will focus on multifunctional applications, in which the QD fluorescence is combined with drug or gene delivery towards theranostic approaches or with complementary technologies for multimodal imaging. We also briefly discuss QD toxicity issues and give a short outlook on future directions of QD-based bioimaging.

745 citations


Proceedings Article
29 Apr 2018
TL;DR: In this paper, a multi-agent actor-critic method called counterfactual multi-agent (COMA) policy gradients is proposed, which uses a centralised critic to estimate the Q-function and decentralised actors to optimise the agents' policies.
Abstract: Many real-world problems, such as network packet routing and the coordination of autonomous vehicles, are naturally modelled as cooperative multi-agent systems. There is a great need for new reinforcement learning methods that can efficiently learn decentralised policies for such systems. To this end, we propose a new multi-agent actor-critic method called counterfactual multi-agent (COMA) policy gradients. COMA uses a centralised critic to estimate the Q-function and decentralised actors to optimise the agents' policies. In addition, to address the challenges of multi-agent credit assignment, it uses a counterfactual baseline that marginalises out a single agent's action, while keeping the other agents' actions fixed. COMA also uses a critic representation that allows the counterfactual baseline to be computed efficiently in a single forward pass. We evaluate COMA in the testbed of StarCraft unit micromanagement, using a decentralised variant with significant partial observability. COMA significantly improves average performance over other multi-agent actor-critic methods in this setting, and the best performing agents are competitive with state-of-the-art centralised controllers that get access to the full state.
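A hedged restatement of the counterfactual baseline described above (notation is assumed here, not quoted from the paper): for agent a with action-observation history τ^a, policy π^a, joint action u, and centralised critic Q, the counterfactual advantage marginalises out only agent a's own action while the other agents' actions u^{-a} stay fixed:

$$A^a(s,\mathbf{u}) \;=\; Q(s,\mathbf{u}) \;-\; \sum_{u'^a} \pi^a\!\left(u'^a \mid \tau^a\right) Q\!\left(s,(\mathbf{u}^{-a},u'^a)\right).$$

Because the sum only re-evaluates the critic over agent a's alternative actions, a critic whose output head ranges over those actions can supply every term in a single forward pass, which is the efficiency property the abstract mentions.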

745 citations


Journal ArticleDOI
TL;DR: The most recent data were obtained from the Centers for Disease Control and Prevention, the Agency for Healthcare Research and Quality, and the National Cancer Institute to estimate the burden and cost of GI and liver disease in the United States.

745 citations


Journal ArticleDOI
09 Dec 2016-Science
TL;DR: The surprising chemistry of so-called frustrated Lewis pairs (FLPs), combinations of Lewis acids and bases that steric demands prevent from forming a classical adduct, is reviewed, along with efforts to extend this concept to asymmetric hydrogenations and its application to various chemical systems.
Abstract: BACKGROUND Since the work of Sabatier 100 years ago, chemists have turned almost exclusively to metals to activate H2 by weakening or cleaving its central bond. This paradigm changed with a 2006 report of a metal-free molecule that reversibly activated H2 across sterically encumbered Lewis acidic boron and Lewis basic phosphorus sites. Shortly thereafter, similar reactions were mediated by systems described as "frustrated Lewis pairs" (FLPs) that were derived from simple combinations of electron donors and acceptors in which steric demands precluded dative bond formation. The variety of such systems has since been expanded to include a wide range of donors and acceptors. Moreover, FLP reactivity has been shown to result when equilibria governing the formation of Lewis acid-base adducts provide access to free acid and base. Mechanistic studies have demonstrated that the FLP activation of H2 proceeds via a mechanism directly analogous to the Piers mechanism for borane-mediated hydrosilylation of ketones, first described in 1996. Nonetheless, the discovery of these metal-free reactions of H2 has prompted considerable interest in this concept and its application to various chemical systems. ADVANCES The application of FLP reactivity with H2 to metal-free hydrogenation catalysis rapidly led to reductions of polar substrates. Over the past decade, the range of reducible substrates has been expanded to a variety of unsaturated compounds, including imines, enamines, olefins, polyaromatics, alkynes, ketones, and aldehydes. Efforts have also extended this technology to asymmetric hydrogenations, with a number of recent systems achieving high selectivity. Early on, it was recognized that the reactivity of FLPs was not limited to H2. FLPs have shown the capacity to capture and react with a variety of small molecules, including olefins, alkynes, CO2, SO2, NO, CO, N2O, and N-sulfinyltolylamines, (p-tol)NSO (p-tol, para-tolyl). This has led to metal-free strategies for CO and CO2 reduction and SO generation and new avenues to transient, persistent, or stable radicals. FLP chemistry has been further extended to new strategies in synthetic organic chemistry, including FLP-mediated approaches to hydroamination, hydroboration, cyclization, and boration reactions. Because transition metals may also be acidic or basic, the reactivity of FLP systems in which one or both of the constituents are metal centers has been reported. Further, metal components can also be ancillary fragments for ligand-based FLP chemistry, or they can act as a scaffold, allowing the cooperative action of an FLP and a metal center on a substrate. In related developments, the notion of FLPs has been applied to the design of model systems for the active sites of the [Ni-Fe], [Fe-Fe], or [Fe] hydrogenase enzymes. Reaching beyond main-group and organic chemistry into polymers and materials chemistry, FLP catalysts have been used to prepare lactone-derived polymers, cyanamide oligomers, and Te-containing heterocycles for applications in photoactive materials. In addition, heterogeneous FLP hydrogenation catalysts have emerged, and the concept also provides a new mechanistic perspective on CO2 reduction on the surface of indium oxide nanocrystals. OUTLOOK Applications of FLP chemistry to metal-free reductions, asymmetric hydrogenations, C–C bond formation, and C–H bond functionalization are continuing to evolve.
Such advances offer strategies for reduced costs and the elimination of toxic contaminants that will undoubtedly garner interest from the synthetic chemistry communities in academia and industry. The expanding range of main-group and transition metal–based FLPs continues to demonstrate the generality of this concept and its broadening utility. However, the innovative synthetic strategies, reactivity, and new perspectives derived from the application of this simple concept to other areas of chemistry are perhaps the most exciting prospect.

745 citations


Journal ArticleDOI
TL;DR: In this paper, hollow particle-based nitrogen-doped carbon nanofibers (HPCNFs-N), composed of interconnected nitrogen-doped carbon hollow nanoparticles, are proposed as an electrode material for supercapacitors.
Abstract: Carbon-based materials, as one of the most important electrode materials for supercapacitors, have attracted tremendous attention. At present, it is highly desirable but remains challenging to prepare one-dimensional carbon complex hollow nanomaterials for further improving the performance of supercapacitors. Herein, we report an effective strategy for the synthesis of hollow particle-based nitrogen-doped carbon nanofibers (HPCNFs-N). By embedding ultrafine zeolitic imidazolate framework (ZIF-8) nanoparticles into electrospun polyacrylonitrile (PAN), the as-prepared composite nanofibers are carbonized into hierarchical porous nanofibers composed of interconnected nitrogen-doped carbon hollow nanoparticles. Owing to its unique structural feature and the desirable chemical composition, the derived HPCNFs-N material exhibits much enhanced electrochemical properties as an electrode material for supercapacitors with remarkable specific capacitance at various current densities, high energy/power density and long cycling stability over 10 000 cycles.

745 citations


Journal ArticleDOI
TL;DR: IL-6 and d-D were closely related to the occurrence of severe COVID-19 in the adult patients, and their combined detection had the highest specificity and sensitivity for early prediction of the severity of COVID-19 patients, which has important clinical value.
Abstract: The role of clinical laboratory data in the differential diagnosis of the severe forms of COVID-19 has not been definitely established. The aim of this study was to look for the warning index in severe COVID-19 patients. We investigated 43 adult patients with COVID-19. The patients were classified into a mild group (28 patients) and a severe group (15 patients). A comparison of the hematological parameters between the mild and severe groups showed significant differences in interleukin-6 (IL-6), d-dimer (d-D), glucose, thrombin time, fibrinogen, and C-reactive protein (P < .05). The optimal threshold and area under the receiver operating characteristic (ROC) curve of IL-6 were 24.3 µg/L and 0.795, respectively, while those of d-D were 0.28 µg/L and 0.750, respectively. The area under the ROC curve of IL-6 combined with d-D was 0.840. The specificity of predicting the severity of COVID-19 during IL-6 and d-D tandem testing was up to 93.3%, while the sensitivity of IL-6 and d-D by parallel test in severe COVID-19 was 96.4%. IL-6 and d-D were closely related to the occurrence of severe COVID-19 in the adult patients, and their combined detection had the highest specificity and sensitivity for early prediction of the severity of COVID-19 patients, which has important clinical value.
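For readers unfamiliar with tandem versus parallel combination of two tests, the standard screening-test identities below (stated under an independence assumption as background, not taken from the paper) show why the combination raises specificity in one mode and sensitivity in the other. With a tandem ("positive only if both positive") rule and a parallel ("positive if either positive") rule:

$$\mathrm{Sp}_{\text{tandem}} = 1-(1-\mathrm{Sp}_1)(1-\mathrm{Sp}_2), \qquad \mathrm{Se}_{\text{tandem}} = \mathrm{Se}_1\,\mathrm{Se}_2,$$
$$\mathrm{Se}_{\text{parallel}} = 1-(1-\mathrm{Se}_1)(1-\mathrm{Se}_2), \qquad \mathrm{Sp}_{\text{parallel}} = \mathrm{Sp}_1\,\mathrm{Sp}_2.$$

This pattern is consistent with the reported 93.3% specificity for the IL-6 plus d-D tandem test and 96.4% sensitivity for the parallel test.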

745 citations


Posted Content
TL;DR: An empirical evaluation of the GELU nonlinearity against the ReLU and ELU activations finds performance improvements across all tasks and suggests a new probabilistic understanding of nonlinearities.
Abstract: We propose the Gaussian Error Linear Unit (GELU), a high-performing neural network activation function. The GELU nonlinearity is the expected transformation of a stochastic regularizer which randomly applies the identity or zero map, combining the intuitions of dropout and zoneout while respecting neuron values. This connection suggests a new probabilistic understanding of nonlinearities. We perform an empirical evaluation of the GELU nonlinearity against the ReLU and ELU activations and find performance improvements across all tasks.
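The abstract does not spell out the functional form, but the GELU is commonly written as x·Φ(x), where Φ is the standard normal CDF. A minimal Python sketch (Python chosen only for illustration; not code from the paper) computes the exact form and the widely used tanh approximation:

import math

def gelu_exact(x: float) -> float:
    # GELU(x) = x * Phi(x), with Phi the standard normal CDF.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x: float) -> float:
    # Common tanh-based approximation of the GELU.
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x**3)))

# The two forms agree closely for moderate inputs.
for v in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(v, round(gelu_exact(v), 4), round(gelu_tanh(v), 4))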

745 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe a method to transform a set of stellar evolution tracks onto a uniform basis and then interpolate within that basis to construct stellar isochrones, accommodating a broad range of stellar types, from substellar objects to high-mass stars, and phases of evolution, from the pre-main sequence to the white dwarf cooling sequence.
Abstract: I describe a method to transform a set of stellar evolution tracks onto a uniform basis and then interpolate within that basis to construct stellar isochrones. The method accommodates a broad range of stellar types, from substellar objects to high-mass stars, and phases of evolution, from the pre-main sequence to the white dwarf cooling sequence. I discuss situations in which stellar physics leads to departures from the otherwise monotonic relation between initial stellar mass and lifetime and how these may be dealt with in isochrone construction. I close with convergence tests and recommendations for the number of points in the uniform basis and the mass between tracks in the original grid required in order to achieve a certain level of accuracy in the resulting isochrones. The programs that implement these methods are free and open-source; they may be obtained from the project webpage.

Journal ArticleDOI
TL;DR: Best available evidence about cerebral palsy–specific early intervention that should follow early diagnosis to optimize neuroplasticity and function is summarized.
Abstract: Importance Cerebral palsy describes the most common physical disability in childhood and occurs in 1 in 500 live births. Historically, the diagnosis has been made between age 12 and 24 months but now can be made before 6 months’ corrected age. Objectives To systematically review best available evidence for early, accurate diagnosis of cerebral palsy and to summarize best available evidence about cerebral palsy–specific early intervention that should follow early diagnosis to optimize neuroplasticity and function. Evidence Review This study systematically searched the literature about early diagnosis of cerebral palsy in MEDLINE (1956-2016), EMBASE (1980-2016), CINAHL (1983-2016), and the Cochrane Library (1988-2016) and by hand searching. Search terms included cerebral palsy , diagnosis , detection , prediction , identification , predictive validity , accuracy , sensitivity , and specificity . The study included systematic reviews with or without meta-analyses, criteria of diagnostic accuracy, and evidence-based clinical guidelines. Findings are reported according to the PRISMA statement, and recommendations are reported according to the Appraisal of Guidelines, Research and Evaluation (AGREE) II instrument. Findings Six systematic reviews and 2 evidence-based clinical guidelines met inclusion criteria. All included articles had high methodological Quality Assessment of Diagnostic Accuracy Studies (QUADAS) ratings. In infants, clinical signs and symptoms of cerebral palsy emerge and evolve before age 2 years; therefore, a combination of standardized tools should be used to predict risk in conjunction with clinical history. Before 5 months’ corrected age, the most predictive tools for detecting risk are term-age magnetic resonance imaging (86%-89% sensitivity), the Prechtl Qualitative Assessment of General Movements (98% sensitivity), and the Hammersmith Infant Neurological Examination (90% sensitivity). After 5 months’ corrected age, the most predictive tools for detecting risk are magnetic resonance imaging (86%-89% sensitivity) (where safe and feasible), the Hammersmith Infant Neurological Examination (90% sensitivity), and the Developmental Assessment of Young Children (83% C index). Topography and severity of cerebral palsy are more difficult to ascertain in infancy, and magnetic resonance imaging and the Hammersmith Infant Neurological Examination may be helpful in assisting clinical decisions. In high-income countries, 2 in 3 individuals with cerebral palsy will walk, 3 in 4 will talk, and 1 in 2 will have normal intelligence. Conclusions and Relevance Early diagnosis begins with a medical history and involves using neuroimaging, standardized neurological, and standardized motor assessments that indicate congruent abnormal findings indicative of cerebral palsy. Clinicians should understand the importance of prompt referral to diagnostic-specific early intervention to optimize infant motor and cognitive plasticity, prevent secondary complications, and enhance caregiver well-being.

Journal ArticleDOI
TL;DR: In this paper, the authors present a spatial analysis of 2013-2015 national drinking water PFAS concentrations from the U.S. Environmental Protection Agency's (US EPA) third Unregulated Contaminant Monitoring Rule (UCMR3) program.
Abstract: Drinking water contamination with poly- and perfluoroalkyl substances (PFASs) poses risks to the developmental, immune, metabolic, and endocrine health of consumers. We present a spatial analysis of 2013–2015 national drinking water PFAS concentrations from the U.S. Environmental Protection Agency’s (US EPA) third Unregulated Contaminant Monitoring Rule (UCMR3) program. The number of industrial sites that manufacture or use these compounds, the number of military fire training areas, and the number of wastewater treatment plants are all significant predictors of PFAS detection frequencies and concentrations in public water supplies. Among samples with detectable PFAS levels, each additional military site within a watershed’s eight-digit hydrologic unit is associated with a 20% increase in PFHxS, a 10% increase in both PFHpA and PFOA, and a 35% increase in PFOS. The number of civilian airports with personnel trained in the use of aqueous film-forming foams is significantly associated with the detection of ...

Proceedings ArticleDOI
01 Jan 2016
TL;DR: MDAnalysis is a library for structural and temporal analysis of molecular dynamics simulation trajectories and individual protein structures that enables users to rapidly write code that is portable and immediately usable in virtually all biomolecular simulation communities.
Abstract: MDAnalysis (http://mdanalysis.org) is a library for structural and temporal analysis of molecular dynamics (MD) simulation trajectories and individual protein structures. MD simulations of biological molecules have become an important tool to elucidate the relationship between molecular structure and physiological function. Simulations are performed with highly optimized software packages on HPC resources but most codes generate output trajectories in their own formats so that the development of new trajectory analysis algorithms is confined to specific user communities and widespread adoption and further development is delayed. MDAnalysis addresses this problem by abstracting access to the raw simulation data and presenting a uniform object-oriented Python interface to the user. It thus enables users to rapidly write code that is portable and immediately usable in virtually all biomolecular simulation communities. The user interface and modular design work equally well in complex scripted work flows, as foundations for other packages, and for interactive and rapid prototyping work in IPython / Jupyter notebooks, especially together with molecular visualization provided by nglview and time series analysis with pandas. MDAnalysis is written in Python and Cython and uses NumPy arrays for easy interoperability with the wider scientific Python ecosystem. It is widely used and forms the foundation for more specialized biomolecular simulation tools. MDAnalysis is available under the GNU General Public License v2.
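To make the object-oriented interface concrete, here is a minimal usage sketch; the test files come bundled with MDAnalysis, while the choice of analysis (a Cα radius-of-gyration time series) is an illustrative assumption rather than an example from the paper:

import MDAnalysis as mda
from MDAnalysis.tests.datafiles import PSF, DCD  # small test topology/trajectory shipped with the package

# A Universe ties a topology to a trajectory and hides the underlying file format.
u = mda.Universe(PSF, DCD)
ca = u.select_atoms("protein and name CA")  # CHARMM-style selection language

# Iterate over frames and collect a per-frame observable.
rgyr = [(ts.frame, ca.radius_of_gyration()) for ts in u.trajectory]
print(rgyr[:3])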

Posted Content
Yujun Lin1, Song Han2, Huizi Mao2, Yu Wang1, William J. Dally2 
TL;DR: Deep Gradient Compression (DGC) as mentioned in this paper employs momentum correction, local gradient clipping, momentum factor masking, and warm-up training to preserve accuracy during compression, and achieves a gradient compression ratio from 270x to 600x without losing accuracy.
Abstract: Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure. The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections. In this paper, we find 99.9% of the gradient exchange in distributed SGD is redundant, and propose Deep Gradient Compression (DGC) to greatly reduce the communication bandwidth. To preserve accuracy during compression, DGC employs four methods: momentum correction, local gradient clipping, momentum factor masking, and warm-up training. We have applied Deep Gradient Compression to image classification, speech recognition, and language modeling with multiple datasets including Cifar10, ImageNet, Penn Treebank, and Librispeech Corpus. On these scenarios, Deep Gradient Compression achieves a gradient compression ratio from 270x to 600x without losing accuracy, cutting the gradient size of ResNet-50 from 97MB to 0.35MB, and for DeepSpeech from 488MB to 0.74MB. Deep gradient compression enables large-scale distributed training on inexpensive commodity 1Gbps Ethernet and facilitates distributed training on mobile.
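The core mechanism is sparsifying each worker's gradient to its largest-magnitude entries while accumulating the rest locally. The numpy sketch below illustrates only that top-k accumulation step; momentum correction, local gradient clipping, momentum factor masking, and warm-up training from the paper are omitted, and the 0.1% keep ratio is an assumption for illustration:

import numpy as np

def sparsify_gradient(grad, residual, keep_ratio=0.001):
    # Keep only the largest-magnitude entries of (residual + grad);
    # everything else stays in the local residual for later iterations.
    acc = residual + grad
    k = max(1, int(keep_ratio * acc.size))
    thresh = np.partition(np.abs(acc).ravel(), -k)[-k]
    mask = np.abs(acc) >= thresh
    sparse_update = np.where(mask, acc, 0.0)   # what gets communicated
    new_residual = np.where(mask, 0.0, acc)    # what stays local
    return sparse_update, new_residual

# Example: a single worker's step on a random gradient.
g = np.random.randn(10_000)
update, res = sparsify_gradient(g, np.zeros_like(g))
print(np.count_nonzero(update), "of", g.size, "entries sent")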

Posted Content
TL;DR: It is demonstrated how combining the effectiveness of the inductive bias of CNNs with the expressivity of transformers enables them to model and thereby synthesize high-resolution images.
Abstract: Designed to learn long-range interactions on sequential data, transformers continue to show state-of-the-art results on a wide variety of tasks. In contrast to CNNs, they contain no inductive bias that prioritizes local interactions. This makes them expressive, but also computationally infeasible for long sequences, such as high-resolution images. We demonstrate how combining the effectiveness of the inductive bias of CNNs with the expressivity of transformers enables them to model and thereby synthesize high-resolution images. We show how to (i) use CNNs to learn a context-rich vocabulary of image constituents, and in turn (ii) utilize transformers to efficiently model their composition within high-resolution images. Our approach is readily applied to conditional synthesis tasks, where both non-spatial information, such as object classes, and spatial information, such as segmentations, can control the generated image. In particular, we present the first results on semantically-guided synthesis of megapixel images with transformers and obtain the state of the art among autoregressive models on class-conditional ImageNet. Code and pretrained models can be found at this https URL .

Journal ArticleDOI
TL;DR: The strategies and perspectives summarized in this review aim to provide practical guidance for an increasing number of researchers to explore next-generation and high-performance PIBs, and the methodology may also be applicable to developing other energy storage systems.
Abstract: Potassium-ion batteries (PIBs) have attracted tremendous attention due to their low cost, fast ionic conductivity in electrolyte, and high operating voltage. Research on PIBs is still in its infancy, however, and achieving a general understanding of the drawbacks of each component and proposing research strategies for overcoming these problems are crucial for the exploration of suitable electrode materials/electrolytes and the establishment of electrode/cell assembly technologies for further development of PIBs. In this review, we summarize our current understanding in this field, classify and highlight the design strategies for addressing the key issues in the research on PIBs, and propose possible pathways for the future development of PIBs toward practical applications. The strategies and perspectives summarized in this review aim to provide practical guidance for an increasing number of researchers to explore next-generation and high-performance PIBs, and the methodology may also be applicable to developing other energy storage systems.

Posted ContentDOI
14 Jul 2015-bioRxiv
TL;DR: Based on the global CTF determination, the local defocus for each particle and for single frames of movies is accurately refined, which improves CTF parameters of all particles for subsequent image processing.
Abstract: Accurate estimation of the contrast transfer function (CTF) is critical for a near-atomic resolution cryo electron microscopy (cryoEM) reconstruction. Here, I present a GPU-accelerated computer program, Gctf, for accurate and robust, real-time CTF determination. Similar to alternative programs, the main target of Gctf is to maximize the cross-correlation of a simulated CTF with the power spectra of observed micrographs after background reduction. However, novel approaches in Gctf improve both speed and accuracy. In addition to GPU acceleration, a fast "1-dimensional search plus 2-dimensional refinement (1S2R)" procedure significantly speeds up Gctf. Based on the global CTF determination, the local defocus for each particle and for single frames of movies is accurately refined, which improves the CTF parameters of all particles for subsequent image processing. A novel diagnosis method using equiphase averaging (EFA) and self-consistency verification procedures have also been implemented in the program for practical use, especially for near-atomic resolution reconstruction. Gctf is an independent program, and its outputs can be easily imported into other cryoEM software such as Relion and Frealign. The results from several representative datasets are shown and discussed in this paper.

Proceedings ArticleDOI
TL;DR: This work proposes model cards, a framework that can be used to document any trained machine learning model in the application fields of computer vision and natural language processing, and provides cards for two supervised models: One trained to detect smiling faces in images, and one training to detect toxic comments in text.
Abstract: Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: One trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related AI technology, increasing transparency into how well AI technology works. We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation.
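As a rough, non-authoritative sketch of how such documentation might be captured programmatically (the field names paraphrase the sections discussed in the paper; the dataclass itself is an illustrative assumption, not an artifact released by the authors):

from dataclasses import dataclass, field

@dataclass
class ModelCard:
    # Minimal container mirroring the kinds of sections a model card reports.
    model_details: str
    intended_use: str
    factors: list = field(default_factory=list)                  # e.g. demographic or phenotypic groups
    metrics: list = field(default_factory=list)                  # evaluation measures and thresholds
    evaluation_data: str = ""
    training_data: str = ""
    quantitative_analyses: dict = field(default_factory=dict)    # per-group and intersectional results
    ethical_considerations: str = ""
    caveats_and_recommendations: str = ""

card = ModelCard(
    model_details="Smiling-face detector (illustrative placeholder)",
    intended_use="Research demonstration only",
    factors=["age group", "Fitzpatrick skin type"],
    metrics=["false positive rate", "false negative rate"],
)
print(card.intended_use)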

Book ChapterDOI
Matej Kristan1, Ales Leonardis2, Jiří Matas3, Michael Felsberg4, Roman Pflugfelder5, Luka Cehovin1, Tomas Vojir3, Gustav Häger4, Alan Lukežič1, Gustavo Fernandez5, Abhinav Gupta6, Alfredo Petrosino7, Alireza Memarmoghadam8, Alvaro Garcia-Martin9, Andres Solis Montero10, Andrea Vedaldi11, Andreas Robinson4, Andy J. Ma12, Anton Varfolomieiev13, A. Aydin Alatan14, Aykut Erdem15, Bernard Ghanem16, Bin Liu, Bohyung Han17, Brais Martinez18, Chang-Ming Chang19, Changsheng Xu20, Chong Sun21, Daijin Kim17, Dapeng Chen22, Dawei Du20, Deepak Mishra23, Dit-Yan Yeung24, Erhan Gundogdu25, Erkut Erdem15, Fahad Shahbaz Khan4, Fatih Porikli26, Fatih Porikli27, Fei Zhao20, Filiz Bunyak28, Francesco Battistone7, Gao Zhu27, Giorgio Roffo29, Gorthi R. K. Sai Subrahmanyam23, Guilherme Sousa Bastos30, Guna Seetharaman31, Henry Medeiros32, Hongdong Li27, Honggang Qi20, Horst Bischof33, Horst Possegger33, Huchuan Lu21, Hyemin Lee17, Hyeonseob Nam34, Hyung Jin Chang35, Isabela Drummond30, Jack Valmadre11, Jae-chan Jeong36, Jaeil Cho36, Jae-Yeong Lee36, Jianke Zhu37, Jiayi Feng20, Jin Gao20, Jin-Young Choi, Jingjing Xiao2, Ji-Wan Kim36, Jiyeoup Jeong, João F. Henriques11, Jochen Lang10, Jongwon Choi, José M. Martínez9, Junliang Xing20, Junyu Gao20, Kannappan Palaniappan28, Karel Lebeda38, Ke Gao28, Krystian Mikolajczyk35, Lei Qin20, Lijun Wang21, Longyin Wen19, Luca Bertinetto11, Madan Kumar Rapuru23, Mahdieh Poostchi28, Mario Edoardo Maresca7, Martin Danelljan4, Matthias Mueller16, Mengdan Zhang20, Michael Arens, Michel Valstar18, Ming Tang20, Mooyeol Baek17, Muhammad Haris Khan18, Naiyan Wang24, Nana Fan39, Noor M. Al-Shakarji28, Ondrej Miksik11, Osman Akin15, Payman Moallem8, Pedro Senna30, Philip H. S. Torr11, Pong C. Yuen12, Qingming Huang39, Qingming Huang20, Rafael Martin-Nieto9, Rengarajan Pelapur28, Richard Bowden38, Robert Laganiere10, Rustam Stolkin2, Ryan Walsh32, Sebastian B. Krah, Shengkun Li19, Shengping Zhang39, Shizeng Yao28, Simon Hadfield38, Simone Melzi29, Siwei Lyu19, Siyi Li24, Stefan Becker, Stuart Golodetz11, Sumithra Kakanuru23, Sunglok Choi36, Tao Hu20, Thomas Mauthner33, Tianzhu Zhang20, Tony P. Pridmore18, Vincenzo Santopietro7, Weiming Hu20, Wenbo Li40, Wolfgang Hübner, Xiangyuan Lan12, Xiaomeng Wang18, Xin Li39, Yang Li37, Yiannis Demiris35, Yifan Wang21, Yuankai Qi39, Zejian Yuan22, Zexiong Cai12, Zhan Xu37, Zhenyu He39, Zhizhen Chi21 
08 Oct 2016
TL;DR: The Visual Object Tracking challenge VOT2016 goes beyond its predecessors by introducing a new semi-automatic ground truth bounding box annotation methodology and extending the evaluation system with the no-reset experiment.
Abstract: The Visual Object Tracking challenge VOT2016 aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 70 trackers are presented, with a large number of trackers being published at major computer vision conferences and journals in the recent years. The number of tested state-of-the-art trackers makes the VOT 2016 the largest and most challenging benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the Appendix. The VOT2016 goes beyond its predecessors by (i) introducing a new semi-automatic ground truth bounding box annotation methodology and (ii) extending the evaluation system with the no-reset experiment. The dataset, the evaluation kit as well as the results are publicly available at the challenge website (http://votchallenge.net).

Book ChapterDOI
TL;DR: In this article, a review of various global sensitivity analysis methods of model output is presented, in a complete methodological framework, in which three kinds of methods are distinguished: the screening (coarse sorting of the most influential inputs among a large number), the measures of importance (quantitative sensitivity indices) and the deep exploration of the model behaviour (measuring the effects of inputs on their all variation range).
Abstract: This chapter reviews, within a complete methodological framework, various global sensitivity analysis methods for model output. Numerous statistical and probabilistic tools (regression, smoothing, tests, statistical learning, Monte Carlo, …) aim at determining the model input variables that contribute most to a quantity of interest that depends on the model output. This quantity can be, for instance, the variance of an output variable. Three kinds of methods are distinguished: screening (coarse sorting of the most influential inputs among a large number), measures of importance (quantitative sensitivity indices), and deep exploration of the model behaviour (measuring the effects of inputs over their whole range of variation). A progressive application methodology is illustrated on a pedagogical example. A synthesis is given to position each method along several axes, mainly the cost in number of model evaluations, the model complexity and the nature of the information provided.
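As one concrete instance of the "measures of importance" category (a standard variance-based definition given as background, not quoted from the chapter), the first-order Sobol index of input X_i for output Y is

$$S_i = \frac{\operatorname{Var}\!\big(\mathbb{E}[\,Y \mid X_i\,]\big)}{\operatorname{Var}(Y)}, \qquad 0 \le S_i \le 1,$$

the share of the output variance explained by X_i alone; total-effect variants additionally count all interactions involving X_i.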

Journal ArticleDOI
TL;DR: It is proved that any surrogate loss function can be used for classification with noisy labels by using importance reweighting, with consistency assurance that the label noise does not ultimately hinder the search for the optimal classifier of the noise-free sample.
Abstract: In this paper, we study a classification problem in which sample labels are randomly corrupted. In this scenario, there is an unobservable sample with noise-free labels. However, before being observed, the true labels are independently flipped with a probability $\rho \in [0,0.5)$ , and the random label noise can be class-conditional. Here, we address two fundamental problems raised by this scenario. The first is how to best use the abundant surrogate loss functions designed for the traditional classification problem when there is label noise. We prove that any surrogate loss function can be used for classification with noisy labels by using importance reweighting, with consistency assurance that the label noise does not ultimately hinder the search for the optimal classifier of the noise-free sample. The other is the open problem of how to obtain the noise rate $\rho$ . We show that the rate is upper bounded by the conditional probability $P(\hat{Y}|X)$ of the noisy sample. Consequently, the rate can be estimated, because the upper bound can be easily reached in classification problems. Experimental results on synthetic and real datasets confirm the efficiency of our methods.
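The reweighting idea can be stated compactly as the generic change-of-measure identity (given here as background; the paper's specific estimator of the weight under class-conditional noise is not reproduced): writing D for the clean distribution and D_ρ for the noisy one,

$$R_{D}(f) \;=\; \mathbb{E}_{(X,\hat{Y})\sim D_\rho}\!\left[\beta(X,\hat{Y})\,\ell\big(f(X),\hat{Y}\big)\right], \qquad \beta(X,\hat{Y}) = \frac{P_{D}(X,\hat{Y})}{P_{D_\rho}(X,\hat{Y})},$$

so minimising a properly reweighted surrogate loss on noisy data targets the same risk as training on clean data, which is why any surrogate loss can be used once the weights are estimated.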

Journal ArticleDOI
12 Nov 2020-Cell
TL;DR: A model is suggested in which viral attachment and infection involve heparan sulfate-dependent enhancement of binding to ACE2; manipulation of heparan sulfate or inhibition of viral adhesion by exogenous heparin presents new therapeutic opportunities.

Book
08 Dec 2020
TL;DR: In this paper, a perforated or woven and rolled insert of metal or tough synthetic material is used to provide a web-like engagement surface for a metal spindle fitting with some play in the socket of another member.
Abstract: Machine parts subject to relative vibration motion, in particular a metal spindle fitting with some play in the socket of another member as in the case of an ignition distributor arm joint, are provided, on at least one of the relatively moving surfaces, with closely spaced cavities in which wear products may accumulate, leaving a web-like engagement surface. The cavities do not permit displacement of materials in the direction of vibration, which has the advantage, if the engagement is greased, of holding the grease in the engagement area. Instead of providing the cavities by pits or grooves on one or both of the relatively vibrating surfaces, a perforated or woven and rolled insert of metal or tough synthetic material may be used.

Journal ArticleDOI
TL;DR: This paper will examine the historical context that gave rise to the increasing use of metaphors as inspiration and justification for the development of new methods, and discuss the reasons for the vulnerability of the metaheuristics field to this line of research.

01 Jan 2016
TL;DR: It is shown that Guided Grad-CAM helps untrained users successfully discern a "stronger" deep network from a "weaker" one even when both networks make identical predictions, and also exposes the somewhat surprising insight that common CNN + LSTM models can be good at localizing discriminative input image regions despite not being trained on grounded image-text pairs.
Abstract: We propose a technique for making Convolutional Neural Network (CNN)-based models more transparent by visualizing the regions of input that are "important" for predictions from these models - or visual explanations. Our approach, called Gradient-weighted Class Activation Mapping (Grad-CAM), uses the class-specific gradient information flowing into the final convolutional layer of a CNN to produce a coarse localization map of the important regions in the image. Grad-CAM is a strict generalization of the Class Activation Mapping. Unlike CAM, Grad-CAM requires no re-training and is broadly applicable to any CNN-based architectures. We also show how Grad-CAM may be combined with existing pixel-space visualizations to create a high-resolution class-discriminative visualization (Guided Grad-CAM). We generate Grad-CAM and Guided Grad-CAM visual explanations to better understand image classification, image captioning, and visual question answering (VQA) models. In the context of image classification models, our visualizations (a) lend insight into their failure modes showing that seemingly unreasonable predictions have reasonable explanations, and (b) outperform pixel-space gradient visualizations (Guided Backpropagation and Deconvolution) on the ILSVRC-15 weakly supervised localization task. For image captioning and VQA, our visualizations expose the somewhat surprising insight that common CNN + LSTM models can often be good at localizing discriminative input image regions despite not being trained on grounded image-text pairs. Finally, we design and conduct human studies to measure if Guided Grad-CAM explanations help users establish trust in the predictions made by deep networks. Interestingly, we show that Guided Grad-CAM helps untrained users successfully discern a "stronger" deep network from a "weaker" one even when both networks make identical predictions.
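A minimal numerical sketch of the Grad-CAM computation described above (array shapes and variable names are assumptions; obtaining the feature maps and the gradients of the class score with respect to them is framework-specific and not shown):

import numpy as np

def grad_cam(feature_maps, gradients):
    # feature_maps: (K, H, W) activations of the final conv layer.
    # gradients:    (K, H, W) d(class score) / d(feature_maps).
    # Returns an (H, W) coarse localization map.
    weights = gradients.mean(axis=(1, 2))                    # one importance weight per channel
    cam = np.tensordot(weights, feature_maps, axes=(0, 0))   # weighted sum of activation maps
    return np.maximum(cam, 0.0)                              # ReLU keeps positively contributing regions

# Toy example with random tensors standing in for real activations and gradients.
A = np.random.rand(256, 14, 14)
dA = np.random.randn(256, 14, 14)
print(grad_cam(A, dA).shape)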

Journal ArticleDOI
TL;DR: It is the position of the Academy of Nutrition and Dietetics that appropriately planned vegetarian, including vegan, diets are healthful, nutritionally adequate, and may provide health benefits for the prevention and treatment of certain diseases.

Journal ArticleDOI
TL;DR: Here, it is found that for processes which are approximately cyclic, the second law for microscopic systems takes on a different form compared to the macroscopic scale, imposing not just one constraint on state transformations, but an entire family of constraints.
Abstract: The second law of thermodynamics places constraints on state transformations. It applies to systems composed of many particles; however, we are seeing that one can formulate laws of thermodynamics when only a small number of particles are interacting with a heat bath. Is there a second law of thermodynamics in this regime? Here, we find that for processes which are approximately cyclic, the second law for microscopic systems takes on a different form compared to the macroscopic scale, imposing not just one constraint on state transformations, but an entire family of constraints. We find a family of free energies which generalize the traditional one, and show that they can never increase. The ordinary second law relates to one of these, with the remainder imposing additional constraints on thermodynamic transitions. We find three regimes which determine which family of second laws govern state transitions, depending on how cyclic the process is. In one regime one can cause an apparent violation of the usual second law, through a process of embezzling work from a large system which remains arbitrarily close to its original state. These second laws are relevant for small systems, and also apply to individual macroscopic systems interacting via long-range interactions. By making precise the definition of thermal operations, the laws of thermodynamics are unified in this framework, with the first law defining the class of operations, the zeroth law emerging as an equivalence relation between thermal states, and the remaining laws being monotonicity of our generalized free energies.
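To make "an entire family of constraints" concrete, the generalized free energies take the following schematic form for states diagonal in the energy basis (a paraphrase of the construction the abstract refers to, with p the state's occupation probabilities, q those of the thermal state at temperature T, and Z the partition function):

$$F_\alpha(\rho) = k_B T\, D_\alpha(\rho \,\|\, \rho_{\mathrm{th}}) - k_B T \ln Z, \qquad D_\alpha(p\|q) = \frac{1}{\alpha-1}\ln \sum_i p_i^{\alpha} q_i^{1-\alpha},$$

and a transition ρ → σ by (approximately cyclic) thermal operations requires F_α(ρ) ≥ F_α(σ) for every α ≥ 0. The α → 1 member reduces to the ordinary free energy, recovering the usual second law as one constraint among many.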

Journal ArticleDOI
TL;DR: The delivery strategy is applied to a mouse model of human hereditary tyrosinemia and it is shown that the treatment generated fumarylacetoacetate hydrolase (Fah)-positive hepatocytes by correcting the causative Fah-splicing mutation and rescued disease symptoms such as weight loss and liver damage.
Abstract: The combination of Cas9, guide RNA and repair template DNA can induce precise gene editing and the correction of genetic diseases in adult mammals. However, clinical implementation of this technology requires safe and effective delivery of all of these components into the nuclei of the target tissue. Here, we combine lipid nanoparticle-mediated delivery of Cas9 mRNA with adeno-associated viruses encoding a sgRNA and a repair template to induce repair of a disease gene in adult animals. We applied our delivery strategy to a mouse model of human hereditary tyrosinemia and show that the treatment generated fumarylacetoacetate hydrolase (Fah)-positive hepatocytes by correcting the causative Fah-splicing mutation. Treatment rescued disease symptoms such as weight loss and liver damage. The efficiency of correction was >6% of hepatocytes after a single application, suggesting potential utility of Cas9-based therapeutic genome editing for a range of diseases.

Journal ArticleDOI
TL;DR: Future research on digital games would benefit from a systematic programme of experimental work, examining in detail which game features are most effective in promoting engagement and supporting learning.
Abstract: Continuing interest in digital games indicated that it would be useful to update Connolly et al.'s (2012) systematic literature review of empirical evidence about the positive impacts and outcomes of games. Since a large number of papers was identified in the period from 2009 to 2014, the current review focused on 143 papers that provided higher quality evidence about the positive outcomes of games. Connolly et al.'s multidimensional analysis of games and their outcomes provided a useful framework for organising the varied research in this area. The most frequently occurring outcome reported for games for learning was knowledge acquisition, while entertainment games addressed a broader range of affective, behaviour change, perceptual and cognitive and physiological outcomes. Games for learning were found across varied topics with STEM subjects and health the most popular. Future research on digital games would benefit from a systematic programme of experimental work, examining in detail which game features are most effective in promoting engagement and supporting learning. The current systematic literature review updates Author (date). The review looks at impacts and outcomes of playing digital games from 2009 to 2014. Multi-component coding of papers, games and learning outcomes was used. Many papers were found, with 143 papers providing high quality evidence. Games for entertainment and learning addressed different outcomes.