
Journal ArticleDOI
TL;DR: In this article, the authors proposed an alternative estimator that is free of contamination, and illustrate the relative shortcomings of two-way fixed effects regressions with leads and lags through an empirical application.

570 citations


Journal ArticleDOI
TL;DR: Important aspects of antibiotic pollution in fresh waters are highlighted: concentrations of antibiotics in the environment are substantial, micro-organisms are susceptible to them, bacteria can evolve resistance in the environment, and antibiotic pollution affects natural food webs while interacting with other stressors; taken together, these pose a number of challenges for environmental scientists.

570 citations


Book
05 Jun 2015
TL;DR: This monograph presents a unified framework for energy efficiency maximization in wireless networks via fractional programming theory, and shows that the framework is general enough to be extended to tackle future challenges that may arise in the design of energy-efficient wireless networks.
Abstract: This monograph presents a unified framework for energy efficiency maximization in wireless networks via fractional programming theory. The definition of energy efficiency is introduced, with reference to single-user and multi-user wireless networks, and it is observed how the problem of resource allocation for energy efficiency optimization is naturally cast as a fractional program. An extensive review of the state-of-the-art in energy efficiency optimization by fractional programming is provided, with reference to centralized and distributed resource allocation schemes. A solid background on fractional programming theory is provided. The key-notion of generalized concavity is presented and its strong connection with fractional functions described. A taxonomy of fractional problems is introduced, and for each class of fractional problem, general solution algorithms are described, discussing their complexity and convergence properties. The described theoretical and algorithmic framework is applied to solve energy efficiency maximization problems in practical wireless networks. A general system and signal model is developed which encompasses many relevant special cases, such as one-hop and two-hop heterogeneous networks, multi-cell networks, small-cell networks, device-to-device systems, cognitive radio systems, and hardware-impaired networks, wherein multiple-antennas and multiple subcarriers are possibly employed. Energy-efficient resource allocation algorithms are developed, considering both centralized, cooperative schemes, as well as distributed approaches for self-organizing networks. Finally, some remarks on future lines of research are given, stating some open problems that remain to be studied. It is shown how the described framework is general enough to be extended in these directions, proving useful in tackling future challenges that may arise in the design of energy-efficient future wireless networks.

570 citations
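Since the monograph's central tool is fractional programming, a small worked example may help. The sketch below applies Dinkelbach's classic algorithm to a toy single-link energy-efficiency problem (achievable rate divided by consumed power); the channel gain, circuit power, and grid-search inner solver are illustrative assumptions, not anything taken from the monograph.

```python
import numpy as np

# Toy energy-efficiency problem (illustrative values, not from the monograph):
# maximize  EE(p) = rate(p) / power(p)  over transmit power p in [0, p_max]
g, Pc, p_max = 2.0, 0.5, 10.0           # channel gain, circuit power, power budget
rate  = lambda p: np.log2(1.0 + g * p)  # achievable rate [bit/s/Hz]
power = lambda p: p + Pc                # total consumed power [W]

def dinkelbach(tol=1e-8, max_iter=100):
    """Dinkelbach's algorithm: solve max_p rate(p) - lam * power(p),
    then update lam = rate(p*) / power(p*) until F(lam) ~ 0."""
    p_grid = np.linspace(0.0, p_max, 100_001)   # crude inner solver: grid search
    lam = 0.0
    for _ in range(max_iter):
        obj = rate(p_grid) - lam * power(p_grid)
        p_star = p_grid[np.argmax(obj)]
        F = rate(p_star) - lam * power(p_star)
        lam = rate(p_star) / power(p_star)
        if abs(F) < tol:
            break
    return p_star, lam

p_opt, ee_opt = dinkelbach()
print(f"optimal power ~ {p_opt:.3f} W, energy efficiency ~ {ee_opt:.3f} bit/s/Hz per W")
```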


Book ChapterDOI
08 Sep 2018
TL;DR: This work presents TrackingNet, the first large-scale dataset and benchmark for object tracking in the wild, which covers a wide selection of object classes in broad and diverse context and provides an extensive benchmark on TrackingNet by evaluating more than 20 trackers.
Abstract: Despite the numerous developments in object tracking, further improvement of current tracking algorithms is limited by small and mostly saturated datasets. As a matter of fact, data-hungry trackers based on deep-learning currently rely on object detection datasets due to the scarcity of dedicated large-scale tracking datasets. In this work, we present TrackingNet, the first large-scale dataset and benchmark for object tracking in the wild. We provide more than 30K videos with more than 14 million dense bounding box annotations. Our dataset covers a wide selection of object classes in broad and diverse context. By releasing such a large-scale dataset, we expect deep trackers to further improve and generalize. In addition, we introduce a new benchmark composed of 500 novel videos, modeled with a distribution similar to our training dataset. By sequestering the annotation of the test set and providing an online evaluation server, we provide a fair benchmark for future development of object trackers. Deep trackers fine-tuned on a fraction of our dataset improve their performance by up to 1.6% on OTB100 and up to 1.7% on TrackingNet Test. We provide an extensive benchmark on TrackingNet by evaluating more than 20 trackers. Our results suggest that object tracking in the wild is far from being solved.

570 citations


Journal ArticleDOI
TL;DR: The demographic, clinical, radiological and laboratory characteristics of six consecutive patients assessed between 1 and 16 April 2020 at the National Hospital for Neurology and Neurosurgery, Queen Square, London, UK with acute ischaemic stroke and COVID-19 are described.
Abstract: Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection, is associated with coagulopathy causing venous and arterial thrombosis.2 Recent data from the pandemic epicentre in Wuhan, China, reported neurological complications in 36% of 214 patients with COVID-19; acute cerebrovascular disease (mainly ischaemic stroke) was more common among 88 patients with severe COVID-19 than those with non-severe disease (5.7% vs 0.8%). However, the mechanisms, phenotype and optimal management of ischaemic stroke associated with COVID-19 remain uncertain. We describe the demographic, clinical, radiological and laboratory characteristics of six consecutive patients assessed between 1 and 16 April 2020 at the National Hospital for Neurology and Neurosurgery, Queen Square, London, UK, with acute ischaemic stroke and COVID-19 (confirmed by reverse-transcriptase PCR (RT-PCR)) (table 1). All six patients had large vessel occlusion with markedly elevated D-dimer levels (≥1000 μg/L). Three patients had multi-territory infarcts, two had concurrent venous thrombosis, and, in two, ischaemic strokes occurred despite therapeutic anticoagulation.

570 citations


Journal ArticleDOI
29 Jul 2016-Science
TL;DR: It is shown that combinatorial, cumulative genome editing of a compact barcode can be used to record lineage information in multicellular systems and that rich, systematically generated maps of organismal development will advance the understanding of development in both healthy and disease states.
Abstract: Multicellular systems develop from single cells through distinct lineages. However, current lineage-tracing approaches scale poorly to whole, complex organisms. Here, we use genome editing to progressively introduce and accumulate diverse mutations in a DNA barcode over multiple rounds of cell division. The barcode, an array of clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 target sites, marks cells and enables the elucidation of lineage relationships via the patterns of mutations shared between cells. In cell culture and zebrafish, we show that rates and patterns of editing are tunable and that thousands of lineage-informative barcode alleles can be generated. By sampling hundreds of thousands of cells from individual zebrafish, we find that most cells in adult organs derive from relatively few embryonic progenitors. In future analyses, genome editing of synthetic target arrays for lineage tracing (GESTALT) can be used to generate large-scale maps of cell lineage in multicellular systems for normal development and disease.

570 citations
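The core idea, that lineage relationships can be read out from patterns of shared barcode edits, can be illustrated with a toy reconstruction. The sketch below is a deliberately simplified stand-in for the maximum-parsimony style analysis applied to GESTALT data: the five hypothetical cells, their edit patterns, and the choice of Jaccard distance with average-linkage clustering are all assumptions made for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, dendrogram

# Toy barcodes: 1 = target site carries an edit, 0 = unedited (hypothetical cells).
# Cells sharing early edits are presumed to share an ancestor.
cells = ["A", "B", "C", "D", "E"]
edits = np.array([
    [1, 1, 0, 0, 0],   # A
    [1, 1, 1, 0, 0],   # B  (shares early edits with A)
    [0, 0, 0, 1, 1],   # C
    [0, 0, 0, 1, 0],   # D  (shares an edit with C)
    [0, 0, 0, 0, 0],   # E  (unedited)
], dtype=bool)

# Jaccard distance over edited sites -> average-linkage tree as a crude lineage proxy
dist = pdist(edits, metric="jaccard")
tree = linkage(dist, method="average")
print(squareform(dist).round(2))
dendrogram(tree, labels=cells, no_plot=True)  # set no_plot=False with matplotlib to draw
```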


Posted Content
TL;DR: The Encoder-Recurrent-Decoder (ERD) model is a recurrent neural network that incorporates nonlinear encoder and decoder networks before and after recurrent layers that extends previous Long Short Term Memory models in the literature to jointly learn representations and their dynamics.
Abstract: We propose the Encoder-Recurrent-Decoder (ERD) model for recognition and prediction of human body pose in videos and motion capture. The ERD model is a recurrent neural network that incorporates nonlinear encoder and decoder networks before and after recurrent layers. We test instantiations of ERD architectures in the tasks of motion capture (mocap) generation, body pose labeling and body pose forecasting in videos. Our model handles mocap training data across multiple subjects and activity domains, and synthesizes novel motions while avoiding drift over long periods of time. For human pose labeling, ERD outperforms a per frame body part detector by resolving left-right body part confusions. For video pose forecasting, ERD predicts body joint displacements across a temporal horizon of 400ms and outperforms a first order motion model based on optical flow. ERDs extend previous Long Short Term Memory (LSTM) models in the literature to jointly learn representations and their dynamics. Our experiments show such representation learning is crucial for both labeling and prediction in space-time. We find this is a distinguishing feature between the spatio-temporal visual domain in comparison to 1D text, speech or handwriting, where straightforward hard coded representations have shown excellent results when directly combined with recurrent units.

570 citations
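As a rough structural sketch of the Encoder-Recurrent-Decoder idea (the pose dimension, layer widths, and training setup below are placeholders rather than the paper's exact configuration), the model wraps a recurrent core between nonlinear encoder and decoder networks and can be trained to predict the next pose from the current one:

```python
import torch
import torch.nn as nn

class ERD(nn.Module):
    """Encoder-Recurrent-Decoder sketch: nonlinear encoder -> LSTM -> nonlinear decoder."""
    def __init__(self, pose_dim=54, hidden=512, rnn_hidden=1000):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(pose_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        self.rnn = nn.LSTM(hidden, rnn_hidden, num_layers=2, batch_first=True)
        self.decoder = nn.Sequential(nn.Linear(rnn_hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, pose_dim))

    def forward(self, poses, state=None):
        # poses: (batch, time, pose_dim) -> predicted next poses, same shape
        h = self.encoder(poses)
        h, state = self.rnn(h, state)
        return self.decoder(h), state

# One illustrative training step on random data (a stand-in for real mocap sequences)
model = ERD()
seq = torch.randn(8, 100, 54)                       # batch of pose sequences
pred, _ = model(seq[:, :-1])                        # predict pose at t+1 from pose at t
loss = nn.functional.mse_loss(pred, seq[:, 1:])
loss.backward()
```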


Journal ArticleDOI
TL;DR: This review summarizes current studies on age-related impairment of Nrf2/EpRE function and discusses the changes in NRF2 regulatory mechanisms with aging.

570 citations


Posted Content
TL;DR: An exhaustive review of the research conducted in neuromorphic computing since the inception of the term is provided to motivate further work by illuminating gaps in the field where new research is needed.
Abstract: Neuromorphic computing has come to refer to a variety of brain-inspired computers, devices, and models that contrast the pervasive von Neumann computer architecture. This biologically inspired approach has created highly connected synthetic neurons and synapses that can be used to model neuroscience theories as well as solve challenging machine learning problems. The promise of the technology is to create a brain-like ability to learn and adapt, but the technical challenges are significant, starting with an accurate neuroscience model of how the brain works, to finding materials and engineering breakthroughs to build devices to support these models, to creating a programming framework so the systems can learn, to creating applications with brain-like capabilities. In this work, we provide a comprehensive survey of the research and motivations for neuromorphic computing over its history. We begin with a 35-year review of the motivations and drivers of neuromorphic computing, then look at the major research areas of the field, which we define as neuro-inspired models, algorithms and learning approaches, hardware and devices, supporting systems, and finally applications. We conclude with a broad discussion on the major research topics that need to be addressed in the coming years to see the promise of neuromorphic computing fulfilled. The goals of this work are to provide an exhaustive review of the research conducted in neuromorphic computing since the inception of the term, and to motivate further work by illuminating gaps in the field where new research is needed.

570 citations


Posted Content
TL;DR: A white-box, interpretable, mathematical model for safety assurance, which the authors call Responsibility-Sensitive Safety (RSS), and a design of a system that adheres to the safety assurance requirements and is scalable to millions of cars.
Abstract: In recent years, car makers and tech companies have been racing towards self driving cars. It seems that the main parameter in this race is who will have the first car on the road. The goal of this paper is to add to the equation two additional crucial parameters. The first is standardization of safety assurance --- what are the minimal requirements that every self-driving car must satisfy, and how can we verify these requirements. The second parameter is scalability --- engineering solutions that lead to unleashed costs will not scale to millions of cars, which will push interest in this field into a niche academic corner, and drive the entire field into a "winter of autonomous driving". In the first part of the paper we propose a white-box, interpretable, mathematical model for safety assurance, which we call Responsibility-Sensitive Safety (RSS). In the second part we describe a design of a system that adheres to our safety assurance requirements and is scalable to millions of cars.

570 citations
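One concrete piece of the RSS model is its minimal safe longitudinal distance between a rear and a front vehicle. The sketch below implements that rule as commonly stated (response time, worst-case acceleration, and braking bounds are the model's parameters; the numeric defaults here are purely illustrative), so treat it as an approximation rather than the paper's authoritative definition:

```python
def rss_safe_longitudinal_distance(v_rear, v_front, rho=1.0,
                                   a_max_accel=3.0, a_min_brake=4.0, a_max_brake=8.0):
    """Approximate RSS minimal safe longitudinal distance [m].

    v_rear, v_front : speeds of the rear and front vehicles [m/s]
    rho             : response time of the rear vehicle [s]
    a_max_accel     : worst-case acceleration of the rear vehicle during rho [m/s^2]
    a_min_brake     : minimal braking the rear vehicle is guaranteed to apply [m/s^2]
    a_max_brake     : maximal braking the front vehicle might apply [m/s^2]
    (Parameter values above are illustrative placeholders.)
    """
    v_after_rho = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_after_rho ** 2 / (2.0 * a_min_brake)
         - v_front ** 2 / (2.0 * a_max_brake))
    return max(d, 0.0)

# Example: both vehicles travelling at 30 m/s (~108 km/h)
print(f"safe gap ~ {rss_safe_longitudinal_distance(30.0, 30.0):.1f} m")
```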


Journal ArticleDOI
TL;DR: This work focuses on how PI3K-dependent activation of Akt and spatial regulation of the tuberous sclerosis complex (TSC) switch on Rheb at the lysosome, where mTORC1 is activated.

Journal ArticleDOI
TL;DR: Antigen escape and downregulation have emerged as major issues impacting the durability of CAR T-cell therapy and ways to overcome these obstacles in order to improve clinical outcomes are explored.
Abstract: Emerging data from chimeric antigen receptor (CAR) T-cell trials in B-cell malignancies demonstrate that a common mechanism of resistance to this novel class of therapeutics is the emergence of tumors with loss or downregulation of the target antigen. Antigen loss or antigen-low escape is likely to emerge as an even greater barrier to success in solid tumors, which manifest greater heterogeneity in target antigen expression. Potential approaches to overcome this challenge include engineering CAR T cells to achieve multispecificity and to respond to lower levels of target antigen and more efficient induction of natural antitumor immune responses as a result of CAR-induced inflammation. In this article, we review the evidence to date for antigen escape and downregulation and discuss approaches currently under study to overcome these obstacles.Significance: Antigen escape and downregulation have emerged as major issues impacting the durability of CAR T-cell therapy. Here, we explore their incidence and ways to overcome these obstacles in order to improve clinical outcomes. Cancer Discov; 8(10); 1219-26. ©2018 AACR.

Journal ArticleDOI
TL;DR: Plastics should remain at the top of the political agenda in Europe and across the world, not only to minimise plastic leakage and pollution, but to promote sustainable growth and to stimulate both green and blue economies.

Journal ArticleDOI
TL;DR: A photovoltaic-electrolysis system with the highest STH efficiency for any water splitting technology to date, to the best of the authors' knowledge, is reported.
Abstract: Hydrogen production via electrochemical water splitting is a promising approach for storing solar energy. For this technology to be economically competitive, it is critical to develop water splitting systems with high solar-to-hydrogen (STH) efficiencies. Here we report a photovoltaic-electrolysis system with the highest STH efficiency for any water splitting technology to date, to the best of our knowledge. Our system consists of two polymer electrolyte membrane electrolysers in series with one InGaP/GaAs/GaInNAsSb triple-junction solar cell, which produces a large-enough voltage to drive both electrolysers with no additional energy input. The solar concentration is adjusted such that the maximum power point of the photovoltaic is well matched to the operating capacity of the electrolysers to optimize the system efficiency. The system achieves a 48-h average STH efficiency of 30%. These results demonstrate the potential of photovoltaic-electrolysis systems for cost-effective solar energy storage.
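As a back-of-the-envelope check on the headline number, one common definition of solar-to-hydrogen efficiency is the electrolysis current times the thermodynamic water-splitting voltage of 1.23 V, divided by the incident solar power. The operating values in the sketch below are made up for illustration; the paper's own accounting may differ in detail.

```python
def sth_efficiency(i_op_amps, p_solar_watts, faradaic_eff=1.0):
    """Solar-to-hydrogen efficiency under one common definition:
    (operating current x 1.23 V x Faradaic efficiency) / incident solar power."""
    return i_op_amps * 1.23 * faradaic_eff / p_solar_watts

# Illustrative numbers only (not taken from the paper):
# 2.5 A of electrolysis current driven by 10.2 W of concentrated sunlight -> ~30% STH
print(f"STH ~ {sth_efficiency(2.5, 10.2):.1%}")
```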

Journal ArticleDOI
06 Mar 2017-BMJ
TL;DR: The StaRI Checklist prompts researchers to describe both the implementation strategy (techniques used to promote implementation of an underused evidence-based intervention) and the effectiveness of the intervention that was being implemented.
Abstract: Implementation studies are often poorly reported and indexed, reducing their potential to inform initiatives to improve healthcare services. The Standards for Reporting Implementation Studies (StaRI) initiative aimed to develop guidelines for transparent and accurate reporting of implementation studies. Informed by the findings of a systematic review and a consensus-building e-Delphi exercise, an international working group of implementation science experts discussed and agreed the StaRI Checklist comprising 27 items. It prompts researchers to describe both the implementation strategy (techniques used to promote implementation of an underused evidence-based intervention) and the effectiveness of the intervention that was being implemented. An accompanying Explanation and Elaboration document (published in BMJ Open, doi:10.1136/bmjopen-2016-013318) details each of the items, explains the rationale, and provides examples of good reporting practice. Adoption of StaRI will improve the reporting of implementation studies, potentially facilitating translation of research into practice and improving the health of individuals and populations.

Journal Article
TL;DR: An efficient deep learning approach is developed that enables spatially and chemically resolved insights into quantum-mechanical observables of molecular systems, and unifies concepts from many-body Hamiltonians with purpose-designed deep tensor neural networks, which leads to size-extensive and uniformly accurate chemical space predictions.
Abstract: Learning from data has led to paradigm shifts in a multitude of disciplines, including web, text and image search, speech recognition, as well as bioinformatics. Can machine learning enable similar breakthroughs in understanding quantum many-body systems? Here we develop an efficient deep learning approach that enables spatially and chemically resolved insights into quantum-mechanical observables of molecular systems. We unify concepts from many-body Hamiltonians with purpose-designed deep tensor neural networks, which leads to size-extensive and uniformly accurate (1 kcal mol−1) predictions in compositional and configurational chemical space for molecules of intermediate size. As an example of chemical relevance, the model reveals a classification of aromatic rings with respect to their stability. Further applications of our model for predicting atomic energies and local chemical potentials in molecules, reliable isomer energies, and molecules with peculiar electronic structure demonstrate the potential of machine learning for revealing insights into complex quantum-chemical systems.
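One property called out in the abstract, size-extensivity, follows naturally from predicting the total energy as a sum of learned per-atom contributions. The sketch below is a generic distance-aware message-passing model in that spirit; it is not the paper's deep tensor neural network, and the atom embeddings, layer sizes, and interaction update are assumptions chosen only to make the idea concrete.

```python
import torch
import torch.nn as nn

class AtomicEnergyNet(nn.Module):
    """Generic sketch: embed atoms, refine embeddings with distance-weighted messages,
    and predict total energy as a sum of per-atom energies (hence size-extensive)."""
    def __init__(self, n_elements=100, dim=64, n_interactions=3):
        super().__init__()
        self.embed = nn.Embedding(n_elements, dim)
        self.interactions = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim + 1, dim), nn.Tanh()) for _ in range(n_interactions)])
        self.readout = nn.Linear(dim, 1)

    def forward(self, z, pos):
        # z: (n_atoms,) atomic numbers; pos: (n_atoms, 3) coordinates
        h = self.embed(z)                                        # (n, dim)
        dist = torch.cdist(pos, pos).unsqueeze(-1)               # (n, n, 1) pairwise distances
        for layer in self.interactions:
            # message from atom j to atom i depends on h_j and the distance d_ij
            msg = layer(torch.cat([h.unsqueeze(0).expand(len(z), -1, -1), dist], dim=-1))
            h = h + msg.sum(dim=1)                               # aggregate over neighbours j
        return self.readout(h).sum()                             # total energy = sum of atomic terms

model = AtomicEnergyNet()
energy = model(torch.tensor([6, 1, 1, 1, 1]), torch.randn(5, 3))  # made-up methane-like input
```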

Journal ArticleDOI
TL;DR: The progress and current status of the transdermal drug delivery field is detailed, numerous pharmaceutical developments which have been employed to overcome limitations associated with skin delivery systems are described and particular attention is paid to the emerging field of microneedle technologies.
Abstract: The skin offers an accessible and convenient site for the administration of medications. To this end, the field of transdermal drug delivery, aimed at developing safe and efficacious means of delivering medications across the skin, has in the past and continues to garner much time and investment with the continuous advancement of new and innovative approaches. This review details the progress and current status of the transdermal drug delivery field and describes numerous pharmaceutical developments which have been employed to overcome limitations associated with skin delivery systems. Advantages and disadvantages of the various approaches are detailed, commercially marketed products are highlighted and particular attention is paid to the emerging field of microneedle technologies.

Journal ArticleDOI
TL;DR: Task-Oriented Flow (TOFlow), proposed in this paper, is a motion representation for low-level video processing, learned in a self-supervised, task-specific manner.
Abstract: Many video enhancement algorithms rely on optical flow to register frames in a video sequence. Precise flow estimation is however intractable; and optical flow itself is often a sub-optimal representation for particular video processing tasks. In this paper, we propose task-oriented flow (TOFlow), a motion representation learned in a self-supervised, task-specific manner. We design a neural network with a trainable motion estimation component and a video processing component, and train them jointly to learn the task-oriented flow. For evaluation, we build Vimeo-90K, a large-scale, high-quality video dataset for low-level video processing. TOFlow outperforms traditional optical flow on standard benchmarks as well as our Vimeo-90K dataset in three video processing tasks: frame interpolation, video denoising/deblocking, and video super-resolution.
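The central design choice is to train the motion-estimation component and the video-processing component jointly on the task loss, rather than optimizing flow accuracy in isolation. Below is a deliberately tiny sketch of that composition for a denoising-style task; the two toy subnetworks, the bilinear warping helper, and the random stand-in data are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(img, flow):
    """Bilinearly warp img (B,C,H,W) by flow (B,2,H,W) given in pixels."""
    B, _, H, W = img.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(img)           # (2,H,W), x then y
    coords = grid.unsqueeze(0) + flow                             # where to sample from
    # normalize to [-1, 1] for grid_sample, which expects (B,H,W,2) in (x, y) order
    coords_x = 2.0 * coords[:, 0] / (W - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (H - 1) - 1.0
    return F.grid_sample(img, torch.stack((coords_x, coords_y), dim=-1), align_corners=True)

flow_net = nn.Sequential(nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 2, 3, padding=1))          # toy motion estimator
task_net = nn.Sequential(nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 3, 3, padding=1))          # toy denoiser
opt = torch.optim.Adam(list(flow_net.parameters()) + list(task_net.parameters()), lr=1e-4)

# One joint training step on random stand-in data (noisy frame pair -> clean reference frame)
noisy1, noisy2, clean1 = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
flow = flow_net(torch.cat([noisy1, noisy2], dim=1))
aligned2 = warp(noisy2, flow)                                     # register frame 2 onto frame 1
out = task_net(torch.cat([noisy1, aligned2], dim=1))
loss = F.l1_loss(out, clean1)                                     # the task loss drives the flow, too
loss.backward(); opt.step()
```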

Journal ArticleDOI
TL;DR: The physical distancing measures adopted by the UK public have substantially reduced contact levels and will likely lead to a substantial decline in cases in the coming weeks, but this projected decline in incidence will not occur immediately.
Abstract: To mitigate and slow the spread of COVID-19, many countries have adopted unprecedented physical distancing policies, including the UK. We evaluate whether these measures might be sufficient to control the epidemic by estimating their impact on the reproduction number (R0, the average number of secondary cases generated per case). We asked a representative sample of UK adults about their contact patterns on the previous day. The questionnaire was conducted online via email recruitment and documents the age and location of contacts and a measure of their intimacy (whether physical contact was made or not). In addition, we asked about adherence to different physical distancing measures. The first surveys were sent on Tuesday, 24 March, 1 day after a “lockdown” was implemented across the UK. We compared measured contact patterns during the “lockdown” to patterns of social contact made during a non-epidemic period. By comparing these, we estimated the change in reproduction number as a consequence of the physical distancing measures imposed. We used a meta-analysis of published estimates to inform our estimates of the reproduction number before interventions were put in place. We found a 74% reduction in the average daily number of contacts observed per participant (from 10.8 to 2.8). This would be sufficient to reduce R0 from 2.6 prior to lockdown to 0.62 (95% confidence interval [CI] 0.37–0.89) after the lockdown, based on all types of contact and 0.37 (95% CI = 0.22–0.53) for physical (skin to skin) contacts only. The physical distancing measures adopted by the UK public have substantially reduced contact levels and will likely lead to a substantial impact and a decline in cases in the coming weeks. However, this projected decline in incidence will not occur immediately as there are significant delays between infection, the onset of symptomatic disease, and hospitalisation, as well as further delays to these events being reported. Tracking behavioural change can give a more rapid assessment of the impact of physical distancing measures than routine epidemiological surveillance.
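The headline estimate can be approximated with a simple scaling argument: if transmission is roughly proportional to the number of contacts, the reported 74% drop in daily contacts maps an R0 of 2.6 to about 0.67, close to the paper's central estimate of 0.62 (the study's actual calculation weights contacts by age-structured mixing matrices, so the sketch below is only indicative).

```python
# Naive scaling sketch: R after distancing ~ R0 x (contacts after / contacts before).
# The paper's estimate (0.62, 95% CI 0.37-0.89) uses age-structured contact matrices,
# so this back-of-the-envelope number is only indicative.
r0_before = 2.6
contacts_before, contacts_after = 10.8, 2.8
reduction = 1 - contacts_after / contacts_before
r_after = r0_before * (contacts_after / contacts_before)
print(f"contact reduction ~ {reduction:.0%}, implied R ~ {r_after:.2f}")  # ~74%, ~0.67
```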

Journal ArticleDOI
TL;DR: This paper identifies the key dimensions of customer service voiced by hotel visitors using a data mining approach, latent Dirichlet allocation (LDA), which uncovers 19 controllable dimensions that are key for hotels to manage their interactions with visitors.
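For readers unfamiliar with the technique, the sketch below shows the general shape of an LDA topic-modelling pass over review text using scikit-learn; the toy reviews, the number of topics, and the vectorizer settings are placeholders, and the study's own corpus, preprocessing, and 19 recovered dimensions are not reproduced here.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny stand-in corpus of hotel reviews (illustrative only)
reviews = [
    "friendly staff at check in and quick room service",
    "room was clean but the wifi was slow and unreliable",
    "great breakfast buffet, helpful front desk staff",
    "noisy air conditioning, slow elevator, small bathroom",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)  # e.g. 3 toy topics

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```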

Journal ArticleDOI
08 Aug 2017-JAMA
TL;DR: In this Viewpoint, the potential unintended consequences that may result from the application of ML-DSS in clinical practice are considered.
Abstract: Over the past decade, machine learning techniques have made substantial advances in many domains. In health care, global interest in the potential of machine learning has increased; for example, a deep learning algorithm has shown high accuracy in detecting diabetic retinopathy.1 There have been suggestions that machine learning will drive changes in health care within a few years, specifically in medical disciplines that require more accurate prognostic models (eg, oncology) and those based on pattern recognition (eg, radiology and pathology). However, comparative studies on the effectiveness of machine learning–based decision support systems (ML-DSS) in medicine are lacking, especially regarding the effects on health outcomes. Moreover, the introduction of new technologies in health care has not always been straightforward or without unintended and adverse effects.2 In this Viewpoint we consider the potential unintended consequences that may result from the application of ML-DSS in clinical practice.

Proceedings ArticleDOI
Pavlo Molchanov, Arun Mallya, Stephen Tyree, Iuri Frosio, Jan Kautz
15 Jun 2019
TL;DR: A novel method is proposed that estimates the contribution of a neuron (filter) to the final loss and iteratively removes those with smaller scores; two variations of the method, using first- and second-order Taylor expansions to approximate a filter's contribution, are described.
Abstract: Structural pruning of neural network parameters reduces computational, energy, and memory transfer costs during inference. We propose a novel method that estimates the contribution of a neuron (filter) to the final loss and iteratively removes those with smaller scores. We describe two variations of our method using the first and second-order Taylor expansions to approximate a filter's contribution. Both methods scale consistently across any network layer without requiring per-layer sensitivity analysis and can be applied to any kind of layer, including skip connections. For modern networks trained on ImageNet, we measured experimentally a high (>93%) correlation between the contribution computed by our methods and a reliable estimate of the true importance. Pruning with the proposed methods led to an improvement over state-of-the-art in terms of accuracy, FLOPs, and parameter reduction. On ResNet-101, we achieve a 40% FLOPS reduction by removing 30% of the parameters, with a loss of 0.02% in the top-1 accuracy on ImageNet.
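The first-order criterion described in the abstract can be sketched in a few lines: score each filter by the squared sum of weight times gradient over its parameters, which approximates the change in loss if that filter were zeroed out. The sketch below computes scores from a single random batch; a real pruning run would accumulate scores over many batches and prune and fine-tune iteratively, and the choice of ResNet-18 here is just for illustration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# First-order Taylor importance (sketch): for each filter, score ~ (sum of weight * grad)^2,
# approximating the change in loss if that filter were removed.
model = models.resnet18(weights=None)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)                 # stand-in batch
y = torch.randint(0, 1000, (8,))
model.zero_grad()
criterion(model(x), y).backward()

scores = {}
for name, module in model.named_modules():
    if isinstance(module, nn.Conv2d):
        w, g = module.weight, module.weight.grad
        # sum weight*grad over each output filter's parameters, then square
        scores[name] = (w * g).flatten(1).sum(dim=1).pow(2)

lowest = min(scores, key=lambda n: scores[n].min())
print(f"layer with the least important filter (this batch): {lowest}")
```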

Journal ArticleDOI
TL;DR: The findings argue that MSA is caused by a unique strain of α-synuclein prions, which is different from the putative prions causing PD and from those causing spontaneous neurodegeneration in TgM83+/+ mice.
Abstract: Prions are proteins that adopt alternative conformations that become self-propagating; the PrPSc prion causes the rare human disorder Creutzfeldt–Jakob disease (CJD). We report here that multiple system atrophy (MSA) is caused by a different human prion composed of the α-synuclein protein. MSA is a slowly evolving disorder characterized by progressive loss of autonomic nervous system function and often signs of parkinsonism; the neuropathological hallmark of MSA is glial cytoplasmic inclusions consisting of filaments of α-synuclein. To determine whether human α-synuclein forms prions, we examined 14 human brain homogenates for transmission to cultured human embryonic kidney (HEK) cells expressing full-length, mutant human α-synuclein fused to yellow fluorescent protein (α-syn140*A53T–YFP) and TgM83+/− mice expressing α-synuclein (A53T). The TgM83+/− mice that were hemizygous for the mutant transgene did not develop spontaneous illness; in contrast, the TgM83+/+ mice that were homozygous developed neurological dysfunction. Brain extracts from 14 MSA cases all transmitted neurodegeneration to TgM83+/− mice after incubation periods of ∼120 d, which was accompanied by deposition of α-synuclein within neuronal cell bodies and axons. All of the MSA extracts also induced aggregation of α-syn*A53T–YFP in cultured cells, whereas none of six Parkinson’s disease (PD) extracts or a control sample did so. Our findings argue that MSA is caused by a unique strain of α-synuclein prions, which is different from the putative prions causing PD and from those causing spontaneous neurodegeneration in TgM83+/+ mice. Remarkably, α-synuclein is the first new human prion to be identified, to our knowledge, since the discovery a half century ago that CJD was transmissible.

Journal ArticleDOI
TL;DR: In this article, the authors measured changes in commonly used optical properties and indices in DOM leached from peat soil, plants, and algae following biological and photochemical degradation to determine whether they provide unique signatures that can be linked to original DOM source.
Abstract: Advances in spectroscopic techniques have led to an increase in the use of optical properties (absorbance and fluorescence) to assess dissolved organic matter (DOM) composition and infer sources and processing. However, little information is available to assess the impact of biological and photolytic processing on the optical properties of original DOM source materials. Over a 3.5 month laboratory study, we measured changes in commonly used optical properties and indices in DOM leached from peat soil, plants, and algae following biological and photochemical degradation to determine whether they provide unique signatures that can be linked to original DOM source. Changes in individual optical parameters varied by source material and process, with biodegradation and photodegradation often causing values to shift in opposite directions. Although values for different source materials frequently overlapped, multivariate statistical analyses showed that unique optical signatures could be linked to original DOM source material, with 17 optical properties determined by discriminant analysis to be significant (p < 0.05) in distinguishing between DOM source and environmental processing. These results demonstrate that inferring source material from optical properties is possible when parameters are evaluated in combination even after extensive biological and photochemical alteration.
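The multivariate step in the study is discriminant analysis over a set of optical indices. The sketch below shows what such a source classification looks like with scikit-learn on synthetic data; the feature matrix, class labels, and number of indices are invented stand-ins, whereas the study worked with 17 significant optical properties measured on real leachates.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in: 90 samples x 6 "optical indices" (e.g. SUVA254, FI, BIX, ...),
# drawn from three shifted clusters that play the role of peat, plant, and algal DOM.
X = np.vstack([rng.normal(loc=m, scale=1.0, size=(30, 6)) for m in (0.0, 1.5, 3.0)])
y = np.repeat(["peat", "plant", "algae"], 30)

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5).mean()
print(f"cross-validated source-classification accuracy on synthetic data: {acc:.2f}")
```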

Journal ArticleDOI
23 Jan 2015-Science
TL;DR: The emergence of artemisinin resistance in Southeast Asia imperils efforts to reduce the global malaria burden; the data provide a conclusive rationale for worldwide K13-propeller sequencing to identify and eliminate artemisinin-resistant parasites.
Abstract: The emergence of artemisinin resistance in Southeast Asia imperils efforts to reduce the global malaria burden. We genetically modified the Plasmodium falciparum K13 locus using zinc-finger nucleases and measured ring-stage survival rates after drug exposure in vitro; these rates correlate with parasite clearance half-lives in artemisinin-treated patients. With isolates from Cambodia, where resistance first emerged, survival rates decreased from 13 to 49% to 0.3 to 2.4% after the removal of K13 mutations. Conversely, survival rates in wild-type parasites increased from ≤0.6% to 2 to 29% after the insertion of K13 mutations. These mutations conferred elevated resistance to recent Cambodian isolates compared with that of reference lines, suggesting a contemporary contribution of additional genetic factors. Our data provide a conclusive rationale for worldwide K13-propeller sequencing to identify and eliminate artemisinin-resistant parasites.

Proceedings ArticleDOI
TL;DR: The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover approximately 1,000 small planets with Rp less than 4 Earth radii and measure the masses of at least 50 of these small worlds.
Abstract: The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover approximately 1,000 small planets with Rp less than 4 Earth radii and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).
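The SPOC transit search is a full pipeline, but its core operation, scanning a light curve for periodic box-shaped dips, can be illustrated with astropy's Box Least Squares periodogram on synthetic data. The injected period, depth, cadence, and noise level below are arbitrary choices for the demo; this is not the SPOC code.

```python
import numpy as np
from astropy.timeseries import BoxLeastSquares

rng = np.random.default_rng(42)

# Synthetic light curve: 27 days of flat flux with noise and a 3-day, 0.5%-deep transit
t = np.arange(0.0, 27.0, 2.0 / 60 / 24)              # 2-minute cadence, in days
flux = 1.0 + rng.normal(0.0, 1e-3, t.size)
in_transit = (t % 3.0) < 0.1                          # ~2.4 h transit every 3 days
flux[in_transit] -= 0.005

bls = BoxLeastSquares(t, flux)
periodogram = bls.autopower(0.1)                      # trial transit duration of 0.1 d
best = np.argmax(periodogram.power)
print(f"recovered period ~ {periodogram.period[best]:.3f} d, "
      f"depth ~ {periodogram.depth[best]:.4f}")
```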

Journal ArticleDOI
TL;DR: The purpose of this commentary is to briefly review the data on the health impacts of fossil-fuel pollution, highlighting the neurodevelopmental impacts, and to briefly describe available means to achieve a low-carbon economy, and some examples of interventions that have benefited health and the economy.
Abstract: Fossil-fuel combustion by-products are the world’s most significant threat to children’s health and future and are major contributors to global inequality and environmental injustice. The emissions include a myriad of toxic air pollutants and carbon dioxide (CO2), which is the most important human-produced climate-altering greenhouse gas. Synergies between air pollution and climate change can magnify the harm to children. Impacts include impairment of cognitive and behavioral development, respiratory illness, and other chronic diseases—all of which may be “seeded“ in utero and affect health and functioning immediately and over the life course. By impairing children’s health, ability to learn, and potential to contribute to society, pollution and climate change cause children to become less resilient and the communities they live in to become less equitable. The developing fetus and young child are disproportionately affected by these exposures because of their immature defense mechanisms and rapid development, especially those in low- and middle-income countries where poverty and lack of resources compound the effects. No country is spared, however: even high-income countries, especially low-income communities and communities of color within them, are experiencing impacts of fossil fuel-related pollution, climate change and resultant widening inequality and environmental injustice. Global pediatric health is at a tipping point, with catastrophic consequences in the absence of bold action. Fortunately, technologies and interventions are at hand to reduce and prevent pollution and climate change, with large economic benefits documented or predicted. All cultures and communities share a concern for the health and well-being of present and future children: this shared value provides a politically powerful lever for action. The purpose of this commentary is to briefly review the data on the health impacts of fossil-fuel pollution, highlighting the neurodevelopmental impacts, and to briefly describe available means to achieve a low-carbon economy, and some examples of interventions that have benefited health and the economy.

Posted Content
TL;DR: The authors used a deep LSTM encoder from an attentional sequence-to-sequence model trained for machine translation (MT) to contextualize word vectors and showed that adding these context vectors (CoVe) improved performance over using only unsupervised word and character vectors on a wide variety of common NLP tasks.
Abstract: Computer vision has benefited from initializing multiple deep layers with weights pretrained on large supervised training sets like ImageNet. Natural language processing (NLP) typically sees initialization of only the lowest layer of deep models with pretrained word vectors. In this paper, we use a deep LSTM encoder from an attentional sequence-to-sequence model trained for machine translation (MT) to contextualize word vectors. We show that adding these context vectors (CoVe) improves performance over using only unsupervised word and character vectors on a wide variety of common NLP tasks: sentiment analysis (SST, IMDb), question classification (TREC), entailment (SNLI), and question answering (SQuAD). For fine-grained sentiment analysis and entailment, CoVe improves performance of our baseline models to the state of the art.
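Structurally, CoVe takes the hidden states of an MT-trained bidirectional LSTM encoder and concatenates them with the word vectors fed to a downstream task model. The sketch below mimics that wiring with randomly initialized components; the vocabulary size, embedding and hidden dimensions, and the absence of actual GloVe weights and MT pretraining are all placeholder assumptions.

```python
import torch
import torch.nn as nn

emb_dim, hid = 300, 300

# Stand-ins for pretrained components: in CoVe the embedding is GloVe and the
# encoder is the (frozen) bi-LSTM from an attentional MT model; here both are random.
glove = nn.Embedding(10_000, emb_dim)
mt_encoder = nn.LSTM(emb_dim, hid, num_layers=2, bidirectional=True, batch_first=True)

def contextualize(token_ids):
    """Return [GloVe; CoVe] vectors for a batch of token id sequences."""
    w = glove(token_ids)                        # (batch, seq, emb_dim) word vectors
    with torch.no_grad():                       # encoder stays frozen for the downstream task
        cove, _ = mt_encoder(w)                 # (batch, seq, 2*hid) context vectors
    return torch.cat([w, cove], dim=-1)         # (batch, seq, emb_dim + 2*hid)

tokens = torch.randint(0, 10_000, (4, 12))      # toy batch: 4 sentences, 12 tokens each
features = contextualize(tokens)                # feed these into a task-specific model
print(features.shape)                           # torch.Size([4, 12, 900])
```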

Proceedings Article
04 Nov 2016
TL;DR: In this paper, the authors propose to augment deep neural networks with a small "detector" subnetwork which is trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations.
Abstract: Machine learning and deep learning in particular has advanced tremendously on perceptual tasks in recent years. However, it remains vulnerable against adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to a human. In this work, we propose to augment deep neural networks with a small "detector" subnetwork which is trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. Our method is orthogonal to prior work on addressing adversarial perturbations, which has mostly focused on making the classification network itself more robust. We show empirically that adversarial perturbations can be detected surprisingly well even though they are quasi-imperceptible to humans. Moreover, while the detectors have been trained to detect only a specific adversary, they generalize to similar and weaker adversaries. In addition, we propose an adversarial attack that fools both the classifier and the detector and a novel training procedure for the detector that counteracts this attack.
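The training recipe, generate adversarial versions of the data and fit a small binary classifier to tell them apart from clean inputs, can be sketched as follows. FGSM is used as the attacker purely for illustration, the detector here reads raw inputs rather than an intermediate feature map of the classifier as in the paper, and the architectures and epsilon are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
detector = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 2))
det_opt = torch.optim.Adam(detector.parameters(), lr=1e-3)

def fgsm(x, y, eps=0.1):
    """Fast gradient sign attack against the (fixed) classifier."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(classifier(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()

# One detector training step on a stand-in MNIST-like batch
x = torch.rand(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))
x_adv = fgsm(x, y)

inputs = torch.cat([x, x_adv])                                # clean + perturbed inputs
labels = torch.cat([torch.zeros(32), torch.ones(32)]).long()  # 0 = genuine, 1 = adversarial
det_loss = F.cross_entropy(detector(inputs), labels)
det_opt.zero_grad(); det_loss.backward(); det_opt.step()
```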

Proceedings Article
01 Jan 2016
TL;DR: Motivated by the Bayesian model criticism framework, MMD-critic is developed, which efficiently learns prototypes and criticism, designed to aid human interpretability.
Abstract: Example-based explanations are widely used in the effort to improve the interpretability of highly complex distributions. However, prototypes alone are rarely sufficient to represent the gist of the complexity. In order for users to construct better mental models and understand complex data distributions, we also need criticism to explain what are not captured by prototypes. Motivated by the Bayesian model criticism framework, we develop MMD-critic which efficiently learns prototypes and criticism, designed to aid human interpretability. A human subject pilot study shows that the MMD-critic selects prototypes and criticism that are useful to facilitate human understanding and reasoning. We also evaluate the prototypes selected by MMD-critic via a nearest prototype classifier, showing competitive performance compared to baselines.
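To make the prototype/criticism distinction concrete, here is a compact numpy sketch in the spirit of MMD-critic: an RBF kernel, plain greedy MMD reduction for prototypes, and witness-function magnitude for criticisms without the paper's diversity regularizer. The data, kernel width, and set sizes are made up, so this is a toy rather than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.5, size=(60, 2)) for m in (-2.0, 0.0, 2.0)])  # toy 3-cluster data

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf_kernel(X, X)
n = len(X)

def mmd2(proto_idx):
    """Squared MMD between the full dataset and the prototype subset."""
    S = np.asarray(proto_idx)
    return K.mean() - 2 * K[:, S].mean() + K[np.ix_(S, S)].mean()

# Greedy prototype selection: repeatedly add the point that lowers MMD^2 the most
prototypes = []
for _ in range(3):
    best = min((j for j in range(n) if j not in prototypes),
               key=lambda j: mmd2(prototypes + [j]))
    prototypes.append(best)

# Criticisms: points where the witness function (data density minus prototype density
# in the kernel's feature space) is largest in magnitude, i.e. poorly captured by prototypes
witness = K.mean(axis=1) - K[:, prototypes].mean(axis=1)
criticisms = [j for j in np.argsort(-np.abs(witness)) if j not in prototypes][:3]
print("prototype indices:", prototypes, "criticism indices:", criticisms)
```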