
Posted Content
TL;DR: In this article, a Laplacian pyramid of GANs is used to generate images in a coarse-to-fine fashion, where a separate GAN model is trained at each level of the pyramid.
Abstract: In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach (Goodfellow et al.). Samples drawn from our model are of significantly higher quality than alternate approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40% of the time, compared to 10% for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset.
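A minimal sketch of the coarse-to-fine sampling scheme described above, using plain NumPy and a stand-in noise function in place of the trained per-level GAN generators (the upsampling operator and the generator stub are illustrative assumptions, not the paper's exact components):

import numpy as np

def upsample(img):
    # 2x nearest-neighbor upsampling of a square image
    return img.repeat(2, axis=0).repeat(2, axis=1)

def fake_generator(coarse, level, rng):
    # Stand-in for a per-level GAN generator: in the paper this is a convnet
    # conditioned on the upsampled coarse image; here it just returns noise
    # shaped like the band-pass detail it would produce.
    return 0.1 * rng.standard_normal(coarse.shape)

def sample_lapgan(num_levels=3, base_size=8, seed=0):
    # Coarse-to-fine sampling: draw a low-resolution sample, then repeatedly
    # upsample it and add the generated detail for the next pyramid level.
    rng = np.random.default_rng(seed)
    img = rng.standard_normal((base_size, base_size))  # coarsest-level sample
    for level in range(num_levels):
        coarse = upsample(img)
        img = coarse + fake_generator(coarse, level, rng)
    return img

print(sample_lapgan().shape)  # (64, 64)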

854 citations


Journal ArticleDOI
TL;DR: The aim of this review is to summarize the incidence, prevalence, trend in mortality, and general prognosis of coronary heart disease and a related condition, acute coronary syndrome, and identify risk groups and areas for possible improvement.
Abstract: The aim of this review is to summarize the incidence, prevalence, trend in mortality, and general prognosis of coronary heart disease (CHD) and a related condition, acute coronary syndrome (ACS). Although CHD mortality has gradually declined over the last decades in western countries, this condition still causes about one-third of all deaths in people older than 35 years. This evidence, along with the fact that mortality from CHD is expected to continue increasing in developing countries, illustrates the need for implementing effective primary prevention approaches worldwide and identifying risk groups and areas for possible improvement.

854 citations


Proceedings Article
19 Jun 2016
TL;DR: This work proposes an alternative approach that moves the computational burden to a learning stage and trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image.
Abstract: Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods require a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.
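The loss being minimized is the Gram-matrix texture statistic of Gatys et al.; the difference is that it is minimized over the weights of a compact feed-forward generator rather than over the pixels of a single image. A rough NumPy sketch of that statistic, with random arrays standing in for feature maps of a pretrained descriptor network (a hypothetical stand-in, not the actual training code):

import numpy as np

def gram_matrix(features):
    # Gram matrix of a (channels, height*width) feature map: the texture
    # statistic matched between generated and reference images.
    hw = features.shape[1]
    return features @ features.T / hw

def texture_loss(gen_features, ref_features):
    # Sum over layers of the squared Frobenius distance between Gram matrices;
    # a texture network minimizes this w.r.t. generator weights, not pixels.
    return sum(float(np.sum((gram_matrix(g) - gram_matrix(r)) ** 2))
               for g, r in zip(gen_features, ref_features))

rng = np.random.default_rng(0)
gen = [rng.standard_normal((8, 64)), rng.standard_normal((16, 16))]
ref = [rng.standard_normal((8, 64)), rng.standard_normal((16, 16))]
print(texture_loss(gen, ref))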

854 citations


Journal ArticleDOI
TL;DR: In this paper, the influence of selective laser melting (SLM) process parameters (laser power, scan speed, scan spacing, and island size) on porosity development in AlSi10Mg alloy builds was investigated using a statistical design-of-experiments approach and correlated with an energy density model.

854 citations


Journal ArticleDOI
29 Apr 2016-Science
TL;DR: It is demonstrated that protein phase separation can create a distinct physical and biochemical compartment that facilitates signaling and promotes signaling outputs both in vitro and in human Jurkat T cells.
Abstract: Activation of various cell surface receptors triggers the reorganization of downstream signaling molecules into micrometer- or submicrometer-sized clusters. However, the functional consequences of such clustering have been unclear. We biochemically reconstituted a 12-component signaling pathway on model membranes, beginning with T cell receptor (TCR) activation and ending with actin assembly. When TCR phosphorylation was triggered, downstream signaling proteins spontaneously separated into liquid-like clusters that promoted signaling outputs both in vitro and in human Jurkat T cells. Reconstituted clusters were enriched in kinases but excluded phosphatases and enhanced actin filament assembly by recruiting and organizing actin regulators. These results demonstrate that protein phase separation can create a distinct physical and biochemical compartment that facilitates signaling.

853 citations


Posted Content
TL;DR: In this article, the authors provide a conceptual review of representation learning on graphs, including matrix factorization-based methods, random-walk based algorithms, and graph neural networks, and highlight a number of important applications and directions for future work.
Abstract: Machine learning on graphs is an important and ubiquitous task with applications ranging from drug design to friendship recommendation in social networks. The primary challenge in this domain is finding a way to represent, or encode, graph structure so that it can be easily exploited by machine learning models. Traditionally, machine learning approaches relied on user-defined heuristics to extract features encoding structural information about a graph (e.g., degree statistics or kernel functions). However, recent years have seen a surge in approaches that automatically learn to encode graph structure into low-dimensional embeddings, using techniques based on deep learning and nonlinear dimensionality reduction. Here we provide a conceptual review of key advancements in this area of representation learning on graphs, including matrix factorization-based methods, random-walk based algorithms, and graph neural networks. We review methods to embed individual nodes as well as approaches to embed entire (sub)graphs. In doing so, we develop a unified framework to describe these recent approaches, and we highlight a number of important applications and directions for future work.
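A compact illustration of the random-walk family of methods the review covers: sample short random walks, count how often node pairs co-occur within a window, and factorize the resulting matrix to obtain low-dimensional node embeddings (a simplified sketch combining the random-walk and matrix-factorization views, not any specific published algorithm):

import numpy as np

def random_walks(adj, num_walks=10, walk_len=8, seed=0):
    # Sample fixed-length random walks starting from every node of a graph
    # given as an adjacency list.
    rng = np.random.default_rng(seed)
    walks = []
    for _ in range(num_walks):
        for start in range(len(adj)):
            walk = [start]
            while len(walk) < walk_len and adj[walk[-1]]:
                walk.append(int(rng.choice(adj[walk[-1]])))
            walks.append(walk)
    return walks

def embed_nodes(adj, dim=2, window=2):
    # Count co-occurrences of nodes appearing within `window` steps of each
    # other on the walks, then factorize the log co-occurrence matrix by SVD.
    n = len(adj)
    cooc = np.zeros((n, n))
    for walk in random_walks(adj):
        for i, u in enumerate(walk):
            for v in walk[max(0, i - window): i + window + 1]:
                if u != v:
                    cooc[u, v] += 1
    u_mat, s, _ = np.linalg.svd(np.log1p(cooc))
    return u_mat[:, :dim] * np.sqrt(s[:dim])

# Two triangles joined by a single edge: nodes of the same triangle embed nearby.
adj = [[1, 2], [0, 2], [0, 1, 3], [2, 4, 5], [3, 5], [3, 4]]
print(embed_nodes(adj))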

853 citations


Proceedings ArticleDOI
21 Sep 2015
TL;DR: The WIKIQA dataset is described, a new publicly available set of question and sentence pairs, collected and annotated for research on open-domain question answering, which is more than an order of magnitude larger than the previous dataset.
Abstract: We describe the WIKIQA dataset, a new publicly available set of question and sentence pairs, collected and annotated for research on open-domain question answering. Most previous work on answer sentence selection focuses on a dataset created using the TREC-QA data, which includes editor-generated questions and candidate answer sentences selected by matching content words in the question. WIKIQA is constructed using a more natural process and is more than an order of magnitude larger than the previous dataset. In addition, the WIKIQA dataset also includes questions for which there are no correct sentences, enabling researchers to work on answer triggering, a critical component in any QA system. We compare several systems on the task of answer sentence selection on both datasets and also describe the performance of a system on the problem of answer triggering using the WIKIQA dataset.

853 citations


Proceedings Article
24 May 2019
TL;DR: This paper introduces a novel class of off-policy algorithms, batch-constrained reinforcement learning, which restricts the action space in order to force the agent towards behaving close to on-policy with respect to a subset of the given data.
Abstract: Many practical applications of reinforcement learning constrain agents to learn from a fixed batch of data which has already been gathered, without offering further possibility for data collection. In this paper, we demonstrate that due to errors introduced by extrapolation, standard off-policy deep reinforcement learning algorithms, such as DQN and DDPG, are incapable of learning with data uncorrelated to the distribution under the current policy, making them ineffective for this fixed batch setting. We introduce a novel class of off-policy algorithms, batch-constrained reinforcement learning, which restricts the action space in order to force the agent towards behaving close to on-policy with respect to a subset of the given data. We present the first continuous control deep reinforcement learning algorithm which can learn effectively from arbitrary, fixed batch data, and empirically demonstrate the quality of its behavior in several tasks.
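A toy sketch of the batch-constrained idea for a discrete action space: restrict the greedy action choice to actions the behavior (batch) policy would plausibly have taken, so that Q-value extrapolation error on unseen state-action pairs cannot dominate (a simplified illustration; the paper's continuous-control algorithm instead uses a generative model of batch actions plus a perturbation network):

import numpy as np

def batch_constrained_action(q_values, behavior_probs, threshold=0.3):
    # Keep only actions whose behavior-policy probability is within `threshold`
    # of the most likely action, then take the argmax of Q over that subset.
    allowed = behavior_probs / behavior_probs.max() >= threshold
    masked_q = np.where(allowed, q_values, -np.inf)
    return int(np.argmax(masked_q))

q = np.array([2.0, 5.0, 1.0])           # Q-estimates (action 1 likely overestimated)
mu = np.array([0.70, 0.05, 0.25])       # behavior policy's action probabilities
print(batch_constrained_action(q, mu))  # 0: action 1 is ruled out as out-of-batch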

853 citations


Posted Content
TL;DR: A neural network-based generative architecture with latent stochastic variables that span a variable number of time steps is proposed; it improves upon recently proposed models, and the latent variables facilitate the generation of long outputs while maintaining context.
Abstract: Sequential data often possesses a hierarchical structure with complex dependencies between subsequences, such as found between the utterances in a dialogue. In an effort to model this kind of generative process, we propose a neural network-based generative architecture, with latent stochastic variables that span a variable number of time steps. We apply the proposed model to the task of dialogue response generation and compare it with recent neural network architectures. We evaluate the model performance through automatic evaluation metrics and by carrying out a human evaluation. The experiments demonstrate that our model improves upon recently proposed models and that the latent variables facilitate the generation of long outputs and maintain the context.

853 citations


Journal ArticleDOI
TL;DR: In this paper, convolutional neural networks with long-term temporal convolutions (LTC-CNNs) were used to learn action representations, demonstrating the importance of high-quality optical flow estimation and achieving state-of-the-art results on two challenging benchmarks for action recognition.
Abstract: Typical human actions last several seconds and exhibit characteristic spatio-temporal structure. Recent methods attempt to capture this structure and learn action representations with convolutional neural networks. Such representations, however, are typically learned at the level of a few video frames, failing to model actions at their full temporal extent. In this work we learn video representations using neural networks with long-term temporal convolutions (LTC). We demonstrate that LTC-CNN models with increased temporal extents improve the accuracy of action recognition. We also study the impact of different low-level representations, such as raw values of video pixels and optical flow vector fields, and demonstrate the importance of high-quality optical flow estimation for learning accurate action models. We report state-of-the-art results on two challenging benchmarks for human action recognition: UCF101 (92.7%) and HMDB51 (67.2%).

853 citations


Journal ArticleDOI
TL;DR: The experimental results confirm the efficiency of the proposed approaches in improving classification accuracy compared to other wrapper-based algorithms, which demonstrates the ability of the Whale Optimization Algorithm (WOA) to search the feature space and select the most informative attributes for classification tasks.

Proceedings ArticleDOI
01 Jul 2017
TL;DR: A deep detail network is proposed to directly reduce the mapping range from input to output, which makes the learning process easier; the method significantly outperforms state-of-the-art methods on both synthetic and real-world images in terms of both qualitative and quantitative measures.
Abstract: We propose a new deep network architecture for removing rain streaks from individual images based on the deep convolutional neural network (CNN). Inspired by the deep residual network (ResNet) that simplifies the learning process by changing the mapping form, we propose a deep detail network to directly reduce the mapping range from input to output, which makes the learning process easier. To further improve the de-rained result, we use a priori image domain knowledge by focusing on high frequency detail during training, which removes background interference and focuses the model on the structure of rain in images. This demonstrates that a deep architecture not only has benefits for high-level vision tasks but also can be used to solve low-level imaging problems. Though we train the network on synthetic data, we find that the learned network generalizes well to real-world test images. Experiments show that the proposed method significantly outperforms state-of-the-art methods on both synthetic and real-world images in terms of both qualitative and quantitative measures. We discuss applications of this structure to denoising and JPEG artifact reduction at the end of the paper.
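A minimal NumPy sketch of the detail-layer decomposition the paper builds on: low-pass filter the rainy image, treat the remainder as the detail layer that carries the rain streaks, and let a learned mapping predict the (negative) rain residual to add back to the input (the box blur and the lambda below stand in for the paper's image decomposition and trained CNN, respectively):

import numpy as np

def box_blur(img, k=5):
    # Simple k x k box blur used here as the low-pass "base layer" filter.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def derain(rainy, detail_net):
    # Split into base + detail, predict a rain residual from the detail layer
    # only, and add it to the input to get the de-rained estimate.
    detail = rainy - box_blur(rainy)
    residual = detail_net(detail)   # a trained CNN in the paper; any callable here
    return rainy + residual

rainy = np.random.default_rng(0).random((32, 32))
print(derain(rainy, lambda d: -0.5 * d).shape)  # (32, 32)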

Journal ArticleDOI
TL;DR: This work reviews the state-of-the-art techniques for controlling portable active lower limb prosthetic and orthotic (P/O) devices in the context of locomotive activities of daily living (ADL), and considers how these can be interfaced with the user’s sensory-motor control system.
Abstract: Technological advancements have led to the development of numerous wearable robotic devices for the physical assistance and restoration of human locomotion. While many challenges remain with respect to the mechanical design of such devices, it is at least equally challenging and important to develop strategies to control them in concert with the intentions of the user. This work reviews the state-of-the-art techniques for controlling portable active lower limb prosthetic and orthotic (P/O) devices in the context of locomotive activities of daily living (ADL), and considers how these can be interfaced with the user’s sensory-motor control system. This review underscores the practical challenges and opportunities associated with P/O control, which can be used to accelerate future developments in this field. Furthermore, this work provides a classification scheme for the comparison of the various control strategies. As a novel contribution, a general framework for the control of portable gait-assistance devices is proposed. This framework accounts for the physical and informatic interactions between the controller, the user, the environment, and the mechanical device itself. Such a treatment of P/Os – not as independent devices, but as actors within an ecosystem – is suggested to be necessary to structure the next generation of intelligent and multifunctional controllers. Each element of the proposed framework is discussed with respect to the role that it plays in the assistance of locomotion, along with how its states can be sensed as inputs to the controller. The reviewed controllers are shown to fit within different levels of a hierarchical scheme, which loosely resembles the structure and functionality of the nominal human central nervous system (CNS). Active and passive safety mechanisms are considered to be central aspects underlying all of P/O design and control, and are shown to be critical for regulatory approval of such devices for real-world use. The works discussed herein provide evidence that, while we are getting ever closer, significant challenges still exist for the development of controllers for portable powered P/O devices that can seamlessly integrate with the user’s neuromusculoskeletal system and are practical for use in locomotive ADL.

Journal ArticleDOI
TL;DR: This Letter interprets the process of encoding inputs in a quantum state as a nonlinear feature map that maps data to quantum Hilbert space and shows how it opens up a new avenue for the design of quantum machine learning algorithms.
Abstract: A basic idea of quantum computing is surprisingly similar to that of kernel methods in machine learning, namely, to efficiently perform computations in an intractably large Hilbert space. In this Letter we explore some theoretical foundations of this link and show how it opens up a new avenue for the design of quantum machine learning algorithms. We interpret the process of encoding inputs in a quantum state as a nonlinear feature map that maps data to quantum Hilbert space. A quantum computer can now analyze the input data in this feature space. Based on this link, we discuss two approaches for building a quantum model for classification. In the first approach, the quantum device estimates inner products of quantum states to compute a classically intractable kernel. The kernel can be fed into any classical kernel method such as a support vector machine. In the second approach, we use a variational quantum circuit as a linear model that classifies data explicitly in Hilbert space. We illustrate these ideas with a feature map based on squeezing in a continuous-variable system, and visualize the working principle with two-dimensional minibenchmark datasets.
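In kernel-method terms, the two approaches discussed above can be summarized as follows (a sketch of the standard identities, with the squared state overlap taken as one common choice of quantum kernel; the notation is assumed here rather than quoted from the Letter):

% Encoding a datum x into a quantum state acts as a feature map into Hilbert
% space; overlaps of encoded states define a kernel that a classical method
% (e.g. an SVM) can consume, while a variational circuit followed by a
% measurement \hat{M}(\theta) realizes a linear model directly in that space.
\[
  x \;\mapsto\; |\phi(x)\rangle, \qquad
  \kappa(x, x') \;=\; \bigl|\langle \phi(x) \mid \phi(x') \rangle\bigr|^{2},
\]
\[
  \text{implicit model: } f(x) = \sum_{m} \alpha_m\, \kappa(x, x_m) + b,
  \qquad
  \text{explicit model: } f(x) = \langle \phi(x) |\, \hat{M}(\theta)\, | \phi(x) \rangle .
\]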

Journal ArticleDOI
01 Apr 2015-Pain
TL;DR: Although significant variability remains in this literature, this review provides guidance regarding possible average rates of opioid misuse and addiction and also highlights areas in need of further clarification.
Abstract: Opioid use in chronic pain treatment is complex, as patients may derive both benefit and harm. Identification of individuals currently using opioids in a problematic way is important given the substantial recent increases in prescription rates and consequent increases in morbidity and mortality. The present review provides updated and expanded information regarding rates of problematic opioid use in chronic pain. Because previous reviews have indicated substantial variability in this literature, several steps were taken to enhance precision and utility. First, problematic use was coded using explicitly defined terms, referring to different patterns of use (ie, misuse, abuse, and addiction). Second, average prevalence rates were calculated and weighted by sample size and study quality. Third, the influence of differences in study methodology was examined. In total, data from 38 studies were included. Rates of problematic use were quite broad, ranging from <1% to 81% across studies. Across most calculations, rates of misuse averaged between 21% and 29% (range, 95% confidence interval [CI]: 13%-38%). Rates of addiction averaged between 8% and 12% (range, 95% CI: 3%-17%). Abuse was reported in only a single study. Only 1 difference emerged when study methods were examined, where rates of addiction were lower in studies that identified prevalence assessment as a primary, rather than secondary, objective. Although significant variability remains in this literature, this review provides guidance regarding possible average rates of opioid misuse and addiction and also highlights areas in need of further clarification.

Journal ArticleDOI
TL;DR: In this paper, a comprehensive review of the supply chain risk management (SCRM) literature from the past decade is presented, along with a detailed review of research developments in risk definitions, risk types, risk factors, and risk management/mitigation strategies.
Abstract: Risk management plays a vital role in effectively operating supply chains in the presence of a variety of uncertainties. Over the years, many researchers have focused on supply chain risk management (SCRM) by contributing in the areas of defining, operationalising and mitigating risks. In this paper, we review and synthesise the extant literature in SCRM in the past decade in a comprehensive manner. The purpose of this paper is threefold. First, we present and categorise SCRM research appearing between 2003 and 2013. Second, we undertake a detailed review associated with research developments in supply chain risk definitions, risk types, risk factors and risk management/mitigation strategies. Third, we analyse the SCRM literature in exploring potential gaps.

Journal ArticleDOI
12 May 2016-Nature
TL;DR: It is found that genes that were retained as duplicates after the teleost-specific whole-genome duplication 320 million years ago were not more likely to be retained after the Ss4R, and that the duplicate retention was not influenced to a great extent by the nature of the predicted protein interactions of the gene products.
Abstract: The whole-genome duplication 80 million years ago of the common ancestor of salmonids (salmonid-specific fourth vertebrate whole-genome duplication, Ss4R) provides unique opportunities to learn about the evolutionary fate of a duplicated vertebrate genome in 70 extant lineages. Here we present a high-quality genome assembly for Atlantic salmon (Salmo salar), and show that large genomic reorganizations, coinciding with bursts of transposon-mediated repeat expansions, were crucial for the post-Ss4R rediploidization process. Comparisons of duplicate gene expression patterns across a wide range of tissues with orthologous genes from a pre-Ss4R outgroup unexpectedly demonstrate far more instances of neofunctionalization than subfunctionalization. Surprisingly, we find that genes that were retained as duplicates after the teleost-specific whole-genome duplication 320 million years ago were not more likely to be retained after the Ss4R, and that the duplicate retention was not influenced to a great extent by the nature of the predicted protein interactions of the gene products. Finally, we demonstrate that the Atlantic salmon assembly can serve as a reference sequence for the study of other salmonids for a range of purposes.

Journal ArticleDOI
TL;DR: During progression and metastasis, tumor cells adapt to oxidative stress by increasing NADPH in various ways, including activation of AMPK, the PPP, and reductive glutamine and folate metabolism.

Journal ArticleDOI
TL;DR: GOT-10k is a large tracking database that offers unprecedentedly wide coverage of common moving objects in the wild; it is the first video trajectory dataset to use the semantic hierarchy of WordNet to guide class population, ensuring a comprehensive and relatively unbiased coverage of diverse moving objects.
Abstract: We introduce here a large tracking database that offers an unprecedentedly wide coverage of common moving objects in the wild, called GOT-10k. Specifically, GOT-10k is built upon the backbone of WordNet structure [1] and it populates the majority of over 560 classes of moving objects and 87 motion patterns, magnitudes wider than the most recent similar-scale counterparts [19], [20], [23], [26]. By releasing the large high-diversity database, we aim to provide a unified training and evaluation platform for the development of class-agnostic, generic purposed short-term trackers. The features of GOT-10k and the contributions of this article are summarized in the following. (1) GOT-10k offers over 10,000 video segments with more than 1.5 million manually labeled bounding boxes, enabling unified training and stable evaluation of deep trackers. (2) GOT-10k is by far the first video trajectory dataset that uses the semantic hierarchy of WordNet to guide class population, which ensures a comprehensive and relatively unbiased coverage of diverse moving objects. (3) For the first time, GOT-10k introduces the one-shot protocol for tracker evaluation, where the training and test classes are zero-overlapped. The protocol avoids biased evaluation results towards familiar objects and it promotes generalization in tracker development. (4) GOT-10k offers additional labels such as motion classes and object visible ratios, facilitating the development of motion-aware and occlusion-aware trackers. (5) We conduct extensive tracking experiments with 39 typical tracking algorithms and their variants on GOT-10k and analyze their results in this paper. (6) Finally, we develop a comprehensive platform for the tracking community that offers full-featured evaluation toolkits, an online evaluation server, and a responsive leaderboard. The annotations of GOT-10k’s test data are kept private to avoid tuning parameters on it.

Journal ArticleDOI
02 Mar 2016-Neuron
TL;DR: In this paper, the authors defined patterns of tau tracer retention in normal aging in relation to age, cognition, and β-amyloid deposition, and found that older age was associated with increased tracers retention in regions of the medial temporal lobe, which predicted worse episodic memory performance.

Journal ArticleDOI
TL;DR: Diagnostic algorithms and practice guidelines that adjust or “correct” their outputs on the basis of a patient’s race or ethnicity can guide clinical decisions, and thus treatment, differently depending on race.
Abstract: Hidden in Plain Sight Diagnostic algorithms and practice guidelines that adjust or “correct” their outputs on the basis of a patient’s race or ethnicity guide decisions in ways that may direct more...

Journal ArticleDOI
08 May 2015-Science
TL;DR: Recent advances in understanding global soil resources, including how carbon stored in soil responds to anthropogenic warming, are reviewed; they reveal the severity of soil-related issues at stake for the remainder of this century and the need to rapidly restore balance to the physical and biological processes that drive and maintain soil properties.
Abstract: Human security has and will continue to rely on Earth's diverse soil resources. Yet we have now exploited the planet's most productive soils. Soil erosion greatly exceeds rates of production in many agricultural regions. Nitrogen produced by fossil fuel and geological reservoirs of other fertilizers are headed toward possible scarcity, increased cost, and/or geopolitical conflict. Climate change is accelerating the microbial release of greenhouse gases from soil organic matter and will likely play a large role in our near-term climate future. In this Review, we highlight challenges facing Earth's soil resources in the coming century. The direct and indirect response of soils to past and future human activities will play a major role in human prosperity and survival.

Journal ArticleDOI
TL;DR: Variational inference (VI), a method from machine learning that approximates probability densities through optimization, is reviewed and a variant that uses stochastic optimization to scale up to massive data is derived.
Abstract: One of the core problems of modern statistics is to approximate difficult-to-compute probability densities. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation involving the posterior density. In this paper, we review variational inference (VI), a method from machine learning that approximates probability densities through optimization. VI has been used in many applications and tends to be faster than classical methods, such as Markov chain Monte Carlo sampling. The idea behind VI is to first posit a family of densities and then to find the member of that family which is close to the target. Closeness is measured by Kullback-Leibler divergence. We review the ideas behind mean-field variational inference, discuss the special case of VI applied to exponential family models, present a full example with a Bayesian mixture of Gaussians, and derive a variant that uses stochastic optimization to scale up to massive data. We discuss modern research in VI and highlight important open problems. VI is powerful, but it is not yet well understood. Our hope in writing this paper is to catalyze statistical research on this class of algorithms.
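The central objects in the review can be stated compactly (standard definitions of the variational objective, written here for reference):

% VI chooses the member of a tractable family Q that is closest in KL
% divergence to the exact posterior; because that KL contains the intractable
% evidence log p(x), one instead maximizes the evidence lower bound (ELBO).
\[
  q^{*}(z) \;=\; \arg\min_{q \in \mathcal{Q}} \,
  \mathrm{KL}\!\left( q(z) \,\middle\|\, p(z \mid x) \right),
\]
\[
  \log p(x) \;=\;
  \underbrace{\mathbb{E}_{q}\!\left[ \log p(x, z) - \log q(z) \right]}_{\mathrm{ELBO}(q)}
  \;+\;
  \mathrm{KL}\!\left( q(z) \,\middle\|\, p(z \mid x) \right),
\]
% so maximizing the ELBO over q in Q is equivalent to minimizing the KL term.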

Journal ArticleDOI
24 Apr 2020
TL;DR: The preliminary findings of this study suggest that the higher CQ dosage should not be recommended for critically ill patients with COVID-19 because of its potential safety hazards, especially when taken concurrently with azithromycin and oseltamivir.
Abstract: Importance There is no specific antiviral therapy recommended for coronavirus disease 2019 (COVID-19). In vitro studies indicate that the antiviral effect of chloroquine diphosphate (CQ) requires a high concentration of the drug. Objective To evaluate the safety and efficacy of 2 CQ dosages in patients with severe COVID-19. Design, Setting, and Participants This parallel, double-masked, randomized, phase IIb clinical trial with 81 adult patients who were hospitalized with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection was conducted from March 23 to April 5, 2020, at a tertiary care facility in Manaus, Brazilian Amazon. Interventions Patients were allocated to receive high-dosage CQ (ie, 600 mg CQ twice daily for 10 days) or low-dosage CQ (ie, 450 mg twice daily on day 1 and once daily for 4 days). Main Outcomes and Measures Primary outcome was reduction in lethality by at least 50% in the high-dosage group compared with the low-dosage group. Data presented here refer primarily to safety and lethality outcomes during treatment on day 13. Secondary end points included participant clinical status, laboratory examinations, and electrocardiogram results. Outcomes will be presented to day 28. Viral respiratory secretion RNA detection was performed on days 0 and 4. Results Out of a predefined sample size of 440 patients, 81 were enrolled (41 [50.6%] to high-dosage group and 40 [49.4%] to low-dosage group). Enrolled patients had a mean (SD) age of 51.1 (13.9) years, and most (60 [75.3%]) were men. Older age (mean [SD] age, 54.7 [13.7] years vs 47.4 [13.3] years) and more heart disease (5 of 28 [17.9%] vs 0) were seen in the high-dose group. Viral RNA was detected in 31 of 40 (77.5%) and 31 of 41 (75.6%) patients in the low-dosage and high-dosage groups, respectively. Lethality until day 13 was 39.0% in the high-dosage group (16 of 41) and 15.0% in the low-dosage group (6 of 40). The high-dosage group presented more instances of QTc interval greater than 500 milliseconds (7 of 37 [18.9%]) compared with the low-dosage group (4 of 36 [11.1%]). Respiratory secretion at day 4 was negative in only 6 of 27 patients (22.2%). Conclusions and Relevance The preliminary findings of this study suggest that the higher CQ dosage should not be recommended for critically ill patients with COVID-19 because of its potential safety hazards, especially when taken concurrently with azithromycin and oseltamivir. These findings cannot be extrapolated to patients with nonsevere COVID-19. Trial Registration ClinicalTrials.gov Identifier: NCT04323527

Proceedings ArticleDOI
26 Apr 2016
TL;DR: This work presents a pipeline for learning deep feature representations from multiple domains with Convolutional Neural Networks (CNNs) and proposes a Domain Guided Dropout algorithm to improve the feature learning procedure.
Abstract: Learning generic and robust feature representations with data from multiple domains for the same problem is of great value, especially for the problems that have multiple datasets but none of them are large enough to provide abundant data variations. In this work, we present a pipeline for learning deep feature representations from multiple domains with Convolutional Neural Networks (CNNs). When training a CNN with data from all the domains, some neurons learn representations shared across several domains, while some others are effective only for a specific one. Based on this important observation, we propose a Domain Guided Dropout algorithm to improve the feature learning procedure. Experiments show the effectiveness of our pipeline and the proposed algorithm. Our methods on the person re-identification problem outperform state-of-the-art methods on multiple datasets by large margins.
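A simplified sketch of the guided-dropout idea described above: estimate how much each neuron matters for a given domain (for example, the loss change when it is muted) and build a per-domain keep mask from those scores. The scoring and the stochastic variant below are illustrative assumptions, not the paper's exact formulas:

import numpy as np

def domain_guided_mask(impact, stochastic=False, temperature=1.0, seed=0):
    # Deterministic variant: keep only neurons whose impact on this domain is
    # positive. Stochastic variant: keep each neuron with a probability that
    # grows with its impact (a sigmoid with a temperature), so weakly useful
    # neurons are occasionally retained.
    if not stochastic:
        return (impact > 0).astype(float)
    rng = np.random.default_rng(seed)
    p_keep = 1.0 / (1.0 + np.exp(-impact / temperature))
    return (rng.random(impact.shape) < p_keep).astype(float)

impact = np.array([0.8, -0.2, 0.05, 1.5])   # hypothetical per-neuron scores
print(domain_guided_mask(impact))            # [1. 0. 1. 1.]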

Journal ArticleDOI
TL;DR: It is suggested that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses, and its effect seems to be weak relative to the real effect sizes being measured.
Abstract: A focus on novel, confirmatory, and statistically significant results leads to substantial bias in the scientific literature. One type of bias, known as “p-hacking,” occurs when researchers collect or select data or statistical analyses until nonsignificant results become significant. Here, we use text-mining to demonstrate that p-hacking is widespread throughout science. We then illustrate how one can test for p-hacking when performing a meta-analysis and show that, while p-hacking is probably common, its effect seems to be weak relative to the real effect sizes being measured. This result suggests that p-hacking probably does not drastically alter scientific consensuses drawn from meta-analyses.

Journal ArticleDOI
TL;DR: It is revealed that metasurfaces created by seemingly different lattices of (dielectric or metallic) meta-atoms with broken in-plane symmetry can support sharp high-Q resonances arising from a distortion of symmetry-protected bound states in the continuum.
Abstract: We reveal that metasurfaces created by seemingly different lattices of (dielectric or metallic) meta-atoms with broken in-plane symmetry can support sharp high-Q resonances arising from a distortion of symmetry-protected bound states in the continuum. We develop a rigorous theory of such asymmetric periodic structures and demonstrate a link between the bound states in the continuum and Fano resonances. Our results suggest the way for smart engineering of resonances in metasurfaces for many applications in nanophotonics and metaoptics.

Journal ArticleDOI
TL;DR: The authors define interpretability in the context of machine learning and introduce the predictive, descriptive, relevant (PDR) framework for discussing interpretations, with three overarching desiderata for evaluation: predictive accuracy, descriptive accuracy, and relevancy, with relevance judged relative to a human audience.
Abstract: Machine-learning models have demonstrated great success in learning complex patterns that enable them to make predictions about unobserved data. In addition to using models for prediction, the ability to interpret what a model has learned is receiving an increasing amount of attention. However, this increased focus has led to considerable confusion about the notion of interpretability. In particular, it is unclear how the wide array of proposed interpretation methods are related and what common concepts can be used to evaluate them. We aim to address these concerns by defining interpretability in the context of machine learning and introducing the predictive, descriptive, relevant (PDR) framework for discussing interpretations. The PDR framework provides 3 overarching desiderata for evaluation: predictive accuracy, descriptive accuracy, and relevancy, with relevancy judged relative to a human audience. Moreover, to help manage the deluge of interpretation methods, we introduce a categorization of existing techniques into model-based and post hoc categories, with subgroups including sparsity, modularity, and simulatability. To demonstrate how practitioners can use the PDR framework to evaluate and understand interpretations, we provide numerous real-world examples. These examples highlight the often underappreciated role played by human audiences in discussions of interpretability. Finally, based on our framework, we discuss limitations of existing methods and directions for future work. We hope that this work will provide a common vocabulary that will make it easier for both practitioners and researchers to discuss and choose from the full range of interpretation methods.

Journal ArticleDOI
TL;DR: The authors' results indicate that antiviral antibodies against SARS-CoV-2 did not decline within 4 months after diagnosis, and antiviral antibody titers assayed by two pan-Ig assays remained on a plateau for the remainder of the study.
Abstract: Background Little is known about the nature and durability of the humoral immune response to infection with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Methods We measure...

Proceedings Article
15 Feb 2018
TL;DR: In this paper, the authors propose to model the traffic flow as a diffusion process on a directed graph and introduce Diffusion Convolutional Recurrent Neural Network (DCRNN), a deep learning framework for traffic forecasting that incorporates both spatial and temporal dependency in the traffic flows.
Abstract: Spatiotemporal forecasting has various applications in neuroscience, climate and transportation domain. Traffic forecasting is one canonical example of such learning task. The task is challenging due to (1) complex spatial dependency on road networks, (2) non-linear temporal dynamics with changing road conditions and (3) inherent difficulty of long-term forecasting. To address these challenges, we propose to model the traffic flow as a diffusion process on a directed graph and introduce Diffusion Convolutional Recurrent Neural Network (DCRNN), a deep learning framework for traffic forecasting that incorporates both spatial and temporal dependency in the traffic flow. Specifically, DCRNN captures the spatial dependency using bidirectional random walks on the graph, and the temporal dependency using the encoder-decoder architecture with scheduled sampling. We evaluate the framework on two real-world large scale road network traffic datasets and observe consistent improvement of 12% - 15% over state-of-the-art baselines.
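The diffusion convolution at the core of DCRNN can be written, roughly following the paper's formulation, as a K-step mixture of forward and reverse random walks on the road graph (W is the weighted adjacency matrix, D_O and D_I the out- and in-degree matrices, and the theta's are learned filter parameters):

% Bidirectional diffusion convolution applied to the p-th input feature:
\[
  X_{:,p} \star_{\mathcal{G}} f_{\theta} \;=\;
  \sum_{k=0}^{K-1}
  \left( \theta_{k,1}\, \bigl(D_O^{-1} W\bigr)^{k}
       + \theta_{k,2}\, \bigl(D_I^{-1} W^{\top}\bigr)^{k} \right) X_{:,p}.
\]

In DCRNN this operation replaces the matrix multiplications inside a recurrent (GRU-style) cell, and the resulting encoder-decoder is trained with scheduled sampling, as the abstract notes.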