
Journal ArticleDOI
TL;DR: The prevalence of developmental disability among US children aged 3 to 17 years increased between 2009 and 2017, and changes by demographic and socioeconomic subgroups may be related to improvements in awareness and access to health care.
Abstract: OBJECTIVES: To study the national prevalence of 10 developmental disabilities in US children aged 3 to 17 years and explore changes over time by associated demographic and socioeconomic characteristics, using the National Health Interview Survey. METHODS: Data come from the 2009 to 2017 National Health Interview Survey, a nationally representative survey of the civilian noninstitutionalized population. Parents reported physician or other health care professional diagnoses of attention-deficit/hyperactivity disorder; autism spectrum disorder; blindness; cerebral palsy; moderate to profound hearing loss; learning disability; intellectual disability; seizures; stuttering or stammering; and other developmental delays. Weighted percentages for each of the selected developmental disabilities and any developmental disability were calculated and stratified by demographic and socioeconomic characteristics. RESULTS: From 2009 to 2011 and 2015 to 2017, there were overall significant increases in the prevalence of any developmental disability (16.2%–17.8%, P …). CONCLUSIONS: The prevalence of developmental disability among US children aged 3 to 17 years increased between 2009 and 2017. Changes by demographic and socioeconomic subgroups may be related to improvements in awareness and access to health care.
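The weighted percentages described above are, in principle, simple to reproduce: each child's diagnosis flag is weighted by that child's survey sampling weight. A minimal sketch with made-up records and column names (not actual NHIS variables):

```python
import pandas as pd

# Hypothetical child-level records: each row is one surveyed child with a
# sampling weight and a parent-reported diagnosis flag (1 = yes, 0 = no).
# Column names are illustrative, not actual NHIS variable names.
records = pd.DataFrame({
    "survey_weight": [1200.0, 850.0, 400.0, 2300.0, 990.0],
    "any_dev_disability": [1, 0, 0, 1, 0],
})

def weighted_prevalence(df, flag_col, weight_col):
    """Weighted percentage: sum of weights of positive cases over total weight."""
    w = df[weight_col]
    return 100.0 * (w * df[flag_col]).sum() / w.sum()

print(f"{weighted_prevalence(records, 'any_dev_disability', 'survey_weight'):.1f}%")
```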

574 citations


Journal ArticleDOI
TL;DR: The prevalence of dementia in the United States declined significantly between 2000 and 2012, and an increase in educational attainment was associated with some of the decline in dementia prevalence, but the full set of social, behavioral, and medical factors contributing to the decline is still uncertain.
Abstract: Importance The aging of the US population is expected to lead to a large increase in the number of adults with dementia, but some recent studies in the United States and other high-income countries suggest that the age-specific risk of dementia may have declined over the past 25 years. Clarifying current and future population trends in dementia prevalence and risk has important implications for patients, families, and government programs. Objective To compare the prevalence of dementia in the United States in 2000 and 2012. Design, Setting, and Participants We used data from the Health and Retirement Study (HRS), a nationally representative, population-based longitudinal survey of individuals in the United States 65 years or older from the 2000 (n = 10 546) and 2012 (n = 10 511) waves of the HRS. Main Outcomes and Measures Dementia was identified in each year using HRS cognitive measures and validated methods for classifying self-respondents, as well as those represented by a proxy. Logistic regression was used to identify socioeconomic and health variables associated with change in dementia prevalence between 2000 and 2012. Results The study cohorts had an average age of 75.0 years (95% CI, 74.8-75.2 years) in 2000 and 74.8 years (95% CI, 74.5-75.1 years) in 2012 (P = .24); 58.4% (95% CI, 57.3%-59.4%) of the 2000 cohort was female compared with 56.3% (95% CI, 55.5%-57.0%) of the 2012 cohort (P …). Conclusions and Relevance The prevalence of dementia in the United States declined significantly between 2000 and 2012. An increase in educational attainment was associated with some of the decline in dementia prevalence, but the full set of social, behavioral, and medical factors contributing to the decline is still uncertain. Continued monitoring of trends in dementia incidence and prevalence will be important for better gauging the full future societal impact of dementia as the number of older adults increases in the decades ahead.

574 citations


Journal ArticleDOI
TL;DR: Menopausal hormone therapy (MHT) is the most effective treatment for vasomotor symptoms and other symptoms of the climacteric and benefits may exceed risks for the majority of symptomatic postmenopausal women who are under age 60 or under 10 years since the onset of menopause.
Abstract: Objective: The objective of this document is to generate a practice guideline for the management and treatment of symptoms of the menopause. Participants: The Treatment of Symptoms of the Menopause Task Force included six experts, a methodologist, and a medical writer, all appointed by The Endocrine Society. Evidence: The Task Force developed this evidenced-based guideline using the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) system to describe the strength of recommendations and the quality of evidence. The Task Force commissioned three systematic reviews of published data and considered several other existing meta-analyses and trials. Consensus Process: Multiple e-mail communications, conference calls, and one face-to-face meeting determined consensus. Committees of The Endocrine Society, representatives from endorsing societies, and members of The Endocrine Society reviewed and commented on the drafts of the guidelines. The Australasian Menopause Society, the British Men...

574 citations


Journal ArticleDOI
02 Aug 2016-JAMA
TL;DR: Policy makers should build on progress made by the Affordable Care Act by continuing to implement the Health Insurance Marketplaces and delivery system reform, increasing federal financial assistance for Marketplace enrollees, introducing a public plan option in areas lacking individual market competition, and taking actions to reduce prescription drug costs.
Abstract: Importance The Affordable Care Act is the most important health care legislation enacted in the United States since the creation of Medicare and Medicaid in 1965. The law implemented comprehensive reforms designed to improve the accessibility, affordability, and quality of health care. Objectives To review the factors influencing the decision to pursue health reform, summarize evidence on the effects of the law to date, recommend actions that could improve the health care system, and identify general lessons for public policy from the Affordable Care Act. Evidence Analysis of publicly available data, data obtained from government agencies, and published research findings. The period examined extends from 1963 to early 2016. Findings The Affordable Care Act has made significant progress toward solving long-standing challenges facing the US health care system related to access, affordability, and quality of care. Since the Affordable Care Act became law, the uninsured rate has declined by 43%, from 16.0% in 2010 to 9.1% in 2015, primarily because of the law’s reforms. Research has documented accompanying improvements in access to care (for example, an estimated reduction in the share of nonelderly adults unable to afford care of 5.5 percentage points), financial security (for example, an estimated reduction in debts sent to collection of $600-$1000 per person gaining Medicaid coverage), and health (for example, an estimated reduction in the share of nonelderly adults reporting fair or poor health of 3.4 percentage points). The law has also begun the process of transforming health care payment systems, with an estimated 30% of traditional Medicare payments now flowing through alternative payment models like bundled payments or accountable care organizations. These and related reforms have contributed to a sustained period of slow growth in per-enrollee health care spending and improvements in health care quality. Despite this progress, major opportunities to improve the health care system remain. Conclusions and Relevance Policy makers should build on progress made by the Affordable Care Act by continuing to implement the Health Insurance Marketplaces and delivery system reform, increasing federal financial assistance for Marketplace enrollees, introducing a public plan option in areas lacking individual market competition, and taking actions to reduce prescription drug costs. Although partisanship and special interest opposition remain, experience with the Affordable Care Act demonstrates that positive change is achievable on some of the nation’s most complex challenges.

573 citations


Journal ArticleDOI
29 May 2020-BMJ
TL;DR: Patients admitted to hospital with covid-19 at this medical center faced major morbidity and mortality, with high rates of acute kidney injury and inpatient dialysis, prolonged intubations, and a bimodal distribution of time to intubation from symptom onset.
Abstract: Objective To characterize patients with coronavirus disease 2019 (covid-19) in a large New York City medical center and describe their clinical course across the emergency department, hospital wards, and intensive care units. Design Retrospective manual medical record review. Setting NewYork-Presbyterian/Columbia University Irving Medical Center, a quaternary care academic medical center in New York City. Participants The first 1000 consecutive patients with a positive result on the reverse transcriptase polymerase chain reaction assay for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) who presented to the emergency department or were admitted to hospital between 1 March and 5 April 2020. Patient data were manually abstracted from electronic medical records. Main outcome measures Characterization of patients, including demographics, presenting symptoms, comorbidities on presentation, hospital course, time to intubation, complications, mortality, and disposition. Results Of the first 1000 patients, 150 presented to the emergency department, 614 were admitted to hospital (not intensive care units), and 236 were admitted or transferred to intensive care units. The most common presenting symptoms were cough (732/1000), fever (728/1000), and dyspnea (631/1000). Patients in hospital, particularly those treated in intensive care units, often had baseline comorbidities including hypertension, diabetes, and obesity. Patients admitted to intensive care units were older, predominantly male (158/236, 66.9%), and had long lengths of stay (median 23 days, interquartile range 12-32 days); 78.0% (184/236) developed acute kidney injury and 35.2% (83/236) needed dialysis. Only 4.4% (6/136) of patients who required mechanical ventilation were first intubated more than 14 days after symptom onset. Time to intubation from symptom onset had a bimodal distribution, with modes at three to four days, and at nine days. As of 30 April, 90 patients remained in hospital and 211 had died in hospital. Conclusions Patients admitted to hospital with covid-19 at this medical center faced major morbidity and mortality, with high rates of acute kidney injury and inpatient dialysis, prolonged intubations, and a bimodal distribution of time to intubation from symptom onset.

573 citations


Journal ArticleDOI
TL;DR: Recent laboratory and clinical evidence showed that in addition to microvascular changes, inflammation and retinal neurodegeneration may contribute to diabetic retinal damage in the early stages of DR.
Abstract: Diabetic retinopathy (DR) is the most common complication of diabetes mellitus (DM). It has long been recognized as a microvascular disease. The diagnosis of DR relies on the detection of microvascular lesions. The treatment of DR remains challenging. The advent of anti-vascular endothelial growth factor (VEGF) therapy demonstrated remarkable clinical benefits in DR patients; however, the majority of patients failed to achieve clinically-significant visual improvement. Therefore, there is an urgent need for the development of new treatments. Laboratory and clinical evidence showed that in addition to microvascular changes, inflammation and retinal neurodegeneration may contribute to diabetic retinal damage in the early stages of DR. Further investigation of the underlying molecular mechanisms may provide targets for the development of new early interventions. Here, we present a review of the current understanding and new insights into pathophysiology in DR, as well as clinical treatments for DR patients. Recent laboratory findings and related clinical trials are also reviewed.

573 citations


Journal ArticleDOI
TL;DR: A timescale for early land plant evolution that integrates over topological uncertainty by exploring the impact of competing hypotheses on bryophyte−tracheophyte relationships, among other variables, on divergence time estimation is established.
Abstract: Establishing the timescale of early land plant evolution is essential for testing hypotheses on the coevolution of land plants and Earth's System. The sparseness of early land plant megafossils and stratigraphic controls on their distribution make the fossil record an unreliable guide, leaving only the molecular clock. However, the application of molecular clock methodology is challenged by the current impasse in attempts to resolve the evolutionary relationships among the living bryophytes and tracheophytes. Here, we establish a timescale for early land plant evolution that integrates over topological uncertainty by exploring the impact of competing hypotheses on bryophyte-tracheophyte relationships, among other variables, on divergence time estimation. We codify 37 fossil calibrations for Viridiplantae following best practice. We apply these calibrations in a Bayesian relaxed molecular clock analysis of a phylogenomic dataset encompassing the diversity of Embryophyta and their relatives within Viridiplantae. Topology and dataset sizes have little impact on age estimates, with greater differences among alternative clock models and calibration strategies. For all analyses, a Cambrian origin of Embryophyta is recovered with highest probability. The estimated ages for crown tracheophytes range from Late Ordovician to late Silurian. This timescale implies an early establishment of terrestrial ecosystems by land plants that is in close accord with recent estimates for the origin of terrestrial animal lineages. Biogeochemical models that are constrained by the fossil record of early land plants, or attempt to explain their impact, must consider the implications of a much earlier, middle Cambrian-Early Ordovician, origin.

573 citations


Posted Content
TL;DR: This protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner, and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network.
Abstract: We design a novel, communication-efficient, failure-robust protocol for secure aggregation of high-dimensional data. Our protocol allows a server to compute the sum of large, user-held data vectors from mobile devices in a secure manner (i.e. without learning each user's individual contribution), and can be used, for example, in a federated learning setting, to aggregate user-provided model updates for a deep neural network. We prove the security of our protocol in the honest-but-curious and active adversary settings, and show that security is maintained even if an arbitrarily chosen subset of users drop out at any time. We evaluate the efficiency of our protocol and show, by complexity analysis and a concrete implementation, that its runtime and communication overhead remain low even on large data sets and client pools. For 16-bit input values, our protocol offers 1.73× communication expansion for 2^10 users and 2^20-dimensional vectors, and 1.98× expansion for 2^14 users and 2^24-dimensional vectors over sending data in the clear.
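The core trick that lets the server learn only the sum is pairwise masking: every pair of users agrees on a random vector that one of them adds and the other subtracts, so all masks cancel in the aggregate. A toy sketch of that cancellation, where pairwise seeds stand in for the paper's key agreement and the secret-sharing machinery that handles dropouts is omitted entirely:

```python
import numpy as np

DIM = 8           # vector length (tiny for illustration)
MOD = 2**16       # work in a finite group, e.g. 16-bit integers

def pairwise_mask(u, v, seed_matrix):
    """Deterministic mask shared by the pair (u, v); one adds it, the other subtracts it."""
    rng = np.random.default_rng(seed_matrix[min(u, v)][max(u, v)])
    return rng.integers(0, MOD, size=DIM)

def masked_update(u, x_u, n_users, seed_matrix):
    y = x_u.copy()
    for v in range(n_users):
        if v == u:
            continue
        m = pairwise_mask(u, v, seed_matrix)
        y = (y + m) % MOD if u < v else (y - m) % MOD
    return y

n_users = 4
rng = np.random.default_rng(0)
# Pairwise seeds stand in for keys derived via key agreement in the real protocol.
seed_matrix = rng.integers(0, 2**31, size=(n_users, n_users))
inputs = [rng.integers(0, 100, size=DIM) for _ in range(n_users)]

server_sum = sum(masked_update(u, inputs[u], n_users, seed_matrix)
                 for u in range(n_users)) % MOD
assert np.array_equal(server_sum, sum(inputs) % MOD)  # masks cancel in the aggregate
print(server_sum)
```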

573 citations


Journal ArticleDOI
TL;DR: While gingival health and gingivitis have many clinical features, case definitions are primarily predicated on presence or absence of bleeding on probing, which creates differences in the way in which a "case" of gingival health or gingivitis is defined for clinical practice as opposed to epidemiologically in population prevalence surveys.
Abstract: Periodontal health is defined by absence of clinically detectable inflammation. There is a biological level of immune surveillance that is consistent with clinical gingival health and homeostasis. Clinical gingival health may be found in a periodontium that is intact, i.e. without clinical attachment loss or bone loss, and on a reduced periodontium in either a non-periodontitis patient (e.g. in patients with some form of gingival recession or following crown lengthening surgery) or in a patient with a history of periodontitis who is currently periodontally stable. Clinical gingival health can be restored following treatment of gingivitis and periodontitis. However, the treated and stable periodontitis patient with current gingival health remains at increased risk of recurrent periodontitis, and accordingly, must be closely monitored. Two broad categories of gingival diseases include non-dental plaque biofilm-induced gingival diseases and dental plaque-induced gingivitis. Non-dental plaque biofilm-induced gingival diseases include a variety of conditions that are not caused by plaque and usually do not resolve following plaque removal. Such lesions may be manifestations of a systemic condition or may be localized to the oral cavity. Dental plaque-induced gingivitis has a variety of clinical signs and symptoms, and both local predisposing factors and systemic modifying factors can affect its extent, severity, and progression. Dental plaque-induced gingivitis may arise on an intact periodontium or on a reduced periodontium in either a non-periodontitis patient or in a currently stable "periodontitis patient" i.e. successfully treated, in whom clinical inflammation has been eliminated (or substantially reduced). A periodontitis patient with gingival inflammation remains a periodontitis patient (Figure 1), and comprehensive risk assessment and management are imperative to ensure early prevention and/or treatment of recurrent/progressive periodontitis. Precision dental medicine defines a patient-centered approach to care, and therefore, creates differences in the way in which a "case" of gingival health or gingivitis is defined for clinical practice as opposed to epidemiologically in population prevalence surveys. Thus, case definitions of gingival health and gingivitis are presented for both purposes. While gingival health and gingivitis have many clinical features, case definitions are primarily predicated on presence or absence of bleeding on probing. Here we classify gingival health and gingival diseases/conditions, along with a summary table of diagnostic features for defining health and gingivitis in various clinical situations.

573 citations



Journal ArticleDOI
Pengcheng Dai
TL;DR: In this paper, an overview of the neutron scattering results on iron-based superconductors is presented, focusing on the evolution of spin excitation spectra as a function of electron/hole-doping and isoelectronic substitution.
Abstract: High-transition temperature (high-$T_c$) superconductivity in the iron pnictides/chalcogenides emerges from the suppression of the static antiferromagnetic order in their parent compounds, similar to copper oxides superconductors. This raises a fundamental question concerning the role of magnetism in the superconductivity of these materials. Neutron scattering, a powerful probe to study the magnetic order and spin dynamics, plays an essential role in determining the relationship between magnetism and superconductivity in high-$T_c$ superconductors. The rapid development of modern neutron time-of-flight spectrometers allows a direct determination of the spin dynamical properties of iron-based superconductors throughout the entire Brillouin zone. In this review, we present an overview of the neutron scattering results on iron-based superconductors, focusing on the evolution of spin excitation spectra as a function of electron/hole-doping and isoelectronic substitution. We compare spin dynamical properties of iron-based superconductors with those of copper oxide and heavy fermion superconductors, and discuss the common features of spin excitations in these three families of unconventional superconductors and their relationship with superconductivity.

Book ChapterDOI
08 Sep 2018
TL;DR: A novel training loss, combining Euclidean loss and local pattern consistency loss, is proposed, which improves the performance of the model in the authors' experiments and achieves superior performance to state-of-the-art methods with far fewer parameters.
Abstract: In this paper, we propose a novel encoder-decoder network, called Scale Aggregation Network (SANet), for accurate and efficient crowd counting. The encoder extracts multi-scale features with scale aggregation modules and the decoder generates high-resolution density maps by using a set of transposed convolutions. Moreover, we find that most existing works use only Euclidean loss which assumes independence among each pixel but ignores the local correlation in density maps. Therefore, we propose a novel training loss, combining of Euclidean loss and local pattern consistency loss, which improves the performance of the model in our experiments. In addition, we use normalization layers to ease the training process and apply a patch-based test scheme to reduce the impact of statistic shift problem. To demonstrate the effectiveness of the proposed method, we conduct extensive experiments on four major crowd counting datasets and our method achieves superior performance to state-of-the-art methods while with much less parameters.
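The combined training loss described above pairs a pixel-wise Euclidean term with a local pattern consistency term computed over small windows of the density map. A rough sketch of that combination using an SSIM-style local term; the window size and weighting below are illustrative, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_pattern_consistency(pred, gt, win=7, c1=1e-4, c2=9e-4):
    """SSIM-style local similarity between predicted and ground-truth density maps."""
    mu_p, mu_g = uniform_filter(pred, win), uniform_filter(gt, win)
    var_p = uniform_filter(pred * pred, win) - mu_p**2
    var_g = uniform_filter(gt * gt, win) - mu_g**2
    cov = uniform_filter(pred * gt, win) - mu_p * mu_g
    ssim = ((2 * mu_p * mu_g + c1) * (2 * cov + c2)) / \
           ((mu_p**2 + mu_g**2 + c1) * (var_p + var_g + c2))
    return ssim.mean()

def combined_loss(pred, gt, alpha=0.001):
    euclidean = np.mean((pred - gt) ** 2)                     # pixel-wise Euclidean term
    consistency = 1.0 - local_pattern_consistency(pred, gt)   # local pattern term
    return euclidean + alpha * consistency

rng = np.random.default_rng(0)
gt = rng.random((64, 64))
pred = gt + 0.05 * rng.standard_normal((64, 64))
print(combined_loss(pred, gt))
```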

Journal ArticleDOI
TL;DR: The main goal of the German Socio-Economic Panel (SOEP), which serves a global research community by providing representative longitudinal data of private households, is in line with the views and visions expressed in Angus Deaton's lecture as discussed by the authors.
Abstract: In his 2015 Economic Sciences Nobel lecture, Angus Deaton emphasized key issues for understanding welfare-enhancing policies (see Deaton 2015): First, differences in resources across individuals should be measured not only at specific points in time but also across the life course. Second, to better assess socio-economic outcomes, direct economic measures of well-being should be linked with other measures of well-being developed by other social science branches, such as sociology, demography, and psychology. Third, data observations should be reconciled with lifecycle models to investigate the causal mechanisms behind socio-economic outcomes. The main goal of the German Socio-Economic Panel (SOEP) is in line with the views and visions expressed in Angus Deaton’s lecture: Established in 1984 and located at the German Institute for Economic Research (DIW Berlin), SOEP serves a global research community by providing representative longitudinal data of private households in Germany. The SOEP’s primary research interests and the original survey concept was rooted in the multidisciplinary Collaborative Research Center SfB3, “Microanalytic Foundations of Social Policy” at the Universities of

Journal ArticleDOI
TL;DR: In this article, a detailed social-psychological model of climate change risk perceptions by combining and integrating cognitive, experiential, and socio-cultural factors was proposed, which was tested empirically on a national sample (N = ǫ808) of the UK population.

Proceedings ArticleDOI
22 May 2016
TL;DR: The transparency-privacy tradeoff is explored and it is proved that a number of useful transparency reports can be made differentially private with very little addition of noise.
Abstract: Algorithmic systems that employ machine learning play an increasing role in making substantive decisions in modern society, ranging from online personalization to insurance and credit decisions to predictive policing. But their decision-making processes are often opaque -- it is difficult to explain why a certain decision was made. We develop a formal foundation to improve the transparency of such decision-making systems. Specifically, we introduce a family of Quantitative Input Influence (QII) measures that capture the degree of influence of inputs on outputs of systems. These measures provide a foundation for the design of transparency reports that accompany system decisions (e.g., explaining a specific credit decision) and for testing tools useful for internal and external oversight (e.g., to detect algorithmic discrimination). Distinctively, our causal QII measures carefully account for correlated inputs while measuring influence. They support a general class of transparency queries and can, in particular, explain decisions about individuals (e.g., a loan decision) and groups (e.g., disparate impact based on gender). Finally, since single inputs may not always have high influence, the QII measures also quantify the joint influence of a set of inputs (e.g., age and income) on outcomes (e.g. loan decisions) and the marginal influence of individual inputs within such a set (e.g., income). Since a single input may be part of multiple influential sets, the average marginal influence of the input is computed using principled aggregation measures, such as the Shapley value, previously applied to measure influence in voting. Further, since transparency reports could compromise privacy, we explore the transparency-privacy tradeoff and prove that a number of useful transparency reports can be made differentially private with very little addition of noise. Our empirical validation with standard machine learning algorithms demonstrates that QII measures are a useful transparency mechanism when black box access to the learning system is available. In particular, they provide better explanations than standard associative measures for a host of scenarios that we consider. Further, we show that in the situations we consider, QII is efficiently approximable and can be made differentially private while preserving accuracy.
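The Shapley-value aggregation mentioned above can be approximated by sampling feature orderings and measuring the marginal change in the model's output when a feature's actual value is revealed. A toy sketch of that estimator for a made-up loan-scoring function; unlike the paper's QII measures, it ignores correlated inputs and joint set influences:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "black box": approves a loan when a weighted score crosses a threshold.
# Weights are made up purely for illustration.
def model(x):                      # x: (n_features,) -> 0/1 decision
    w = np.array([0.6, 0.3, 0.1])  # income, credit history, age
    return float(w @ x > 0.5)

data = rng.random((500, 3))        # background population used for interventions
individual = np.array([0.9, 0.2, 0.4])

def shapley_influence(feature, n_samples=2000):
    """Average marginal effect of revealing `feature`, over random orders and interventions."""
    n_features = individual.size
    total = 0.0
    for _ in range(n_samples):
        order = rng.permutation(n_features)
        x = data[rng.integers(len(data))].copy()   # unrevealed features drawn from the data
        for f in order:                            # reveal features preceding `feature`
            if f == feature:
                break
            x[f] = individual[f]
        before = model(x)
        x[feature] = individual[feature]
        total += model(x) - before
    return total / n_samples

for i, name in enumerate(["income", "credit_history", "age"]):
    print(name, round(shapley_influence(i), 3))
```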

Proceedings ArticleDOI
01 Jan 2017
TL;DR: One-shot video object segmentation (OSVOS) as mentioned in this paper is based on a fully-convolutional neural network architecture that is able to successively transfer generic semantic information, learned on ImageNet, to the task of foreground segmentation, and finally to learning the appearance of a single annotated object of the test sequence.
Abstract: This paper tackles the task of semi-supervised video object segmentation, i.e., the separation of an object from the background in a video, given the mask of the first frame. We present One-Shot Video Object Segmentation (OSVOS), based on a fully-convolutional neural network architecture that is able to successively transfer generic semantic information, learned on ImageNet, to the task of foreground segmentation, and finally to learning the appearance of a single annotated object of the test sequence (hence one-shot). Although all frames are processed independently, the results are temporally coherent and stable. We perform experiments on two annotated video segmentation databases, which show that OSVOS is fast and improves the state of the art by a significant margin (79.8% vs 68.0%).

Journal ArticleDOI
TL;DR: Purge Haplotigs improves the haploid and diploid representations of third-gen sequencing based genome assemblies by identifying and reassigning allelic contigs and is less likely to over-purge repetitive or paralogous elements compared to alignment-only based methods.
Abstract: Recent developments in third-gen long read sequencing and diploid-aware assemblers have resulted in the rapid release of numerous reference-quality assemblies for diploid genomes. However, assembly of highly heterozygous genomes is still problematic when regional heterogeneity is so high that haplotype homology is not recognised during assembly. This results in regional duplication rather than consolidation into allelic variants and can cause issues with downstream analysis, for example variant discovery, or haplotype reconstruction using the diploid assembly with unpaired allelic contigs. A new pipeline—Purge Haplotigs—was developed specifically for third-gen sequencing-based assemblies to automate the reassignment of allelic contigs, and to assist in the manual curation of genome assemblies. The pipeline uses a draft haplotype-fused assembly or a diploid assembly, read alignments, and repeat annotations to identify allelic variants in the primary assembly. The pipeline was tested on a simulated dataset and on four recent diploid (phased) de novo assemblies from third-generation long-read sequencing, and compared with a similar tool. After processing with Purge Haplotigs, haploid assemblies were less duplicated with minimal impact on genome completeness, and diploid assemblies had more pairings of allelic contigs. Purge Haplotigs improves the haploid and diploid representations of third-gen sequencing based genome assemblies by identifying and reassigning allelic contigs. The implementation is fast and scales well with large genomes, and it is less likely to over-purge repetitive or paralogous elements compared to alignment-only based methods. The software is available at https://bitbucket.org/mroachawri/purge_haplotigs under a permissive MIT licence.
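The first step of such a pipeline is a read-depth analysis: contigs whose coverage sits near the haploid peak of the coverage histogram are flagged as candidate haplotigs, while contigs near the diploid peak are kept in the primary assembly. A simplified sketch of that flagging with invented depth values; the real tool additionally aligns suspect contigs against the rest of the assembly and uses repeat annotations before reassigning anything:

```python
# Hypothetical per-contig median read depths (e.g. from an alignment of long reads).
# In a real assembly these come from a coverage histogram with two peaks:
# one at haploid depth (allelic/duplicated contigs) and one at diploid depth.
contig_depth = {"ctg1": 58.0, "ctg2": 31.0, "ctg3": 60.5, "ctg4": 29.0, "ctg5": 44.0}

DIPLOID_PEAK = 60.0   # assumed depth of collapsed (both-haplotype) regions
HAPLOID_PEAK = 30.0   # assumed depth of regions split into two allelic contigs

def classify(depth, margin=0.25):
    """Flag contigs whose depth sits near the haploid peak as candidate haplotigs."""
    if abs(depth - HAPLOID_PEAK) <= margin * HAPLOID_PEAK:
        return "candidate haplotig (check alignment to a primary contig)"
    if abs(depth - DIPLOID_PEAK) <= margin * DIPLOID_PEAK:
        return "keep in primary assembly"
    return "ambiguous (manual curation)"

for name, depth in contig_depth.items():
    print(name, depth, "->", classify(depth))
```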

Journal ArticleDOI
TL;DR: It is suggested that molecular rotations, with the consequent dynamical change of the band structure, might be at the origin of the slow carrier recombination and the superior conversion efficiency of CH3NH3PbI3.
Abstract: The hybrid halide perovskite CH3NH3PbI3 has enabled solar cells to reach an efficiency of about 20%, demonstrating a pace for improvements with no precedents in the solar energy arena. Despite such explosive progress, the microscopic origin behind the success of such material is still debated, with the role played by the organic cations in the light-harvesting process remaining unclear. Here van der Waals-corrected density functional theory calculations reveal that the orientation of the organic molecules plays a fundamental role in determining the material electronic properties. For instance, if CH3NH3 orients along a (011)-like direction, the PbI6 octahedral cage will distort and the bandgap will become indirect. Our results suggest that molecular rotations, with the consequent dynamical change of the band structure, might be at the origin of the slow carrier recombination and the superior conversion efficiency of CH3NH3PbI3.

Proceedings ArticleDOI
01 Jan 2019
TL;DR: This article propose a gradient-guided search over tokens which finds short trigger sequences (e.g., one word for classification and four words for language modeling) that successfully trigger the target prediction.
Abstract: Adversarial examples highlight model vulnerabilities and are useful for evaluation and interpretation. We define universal adversarial triggers: input-agnostic sequences of tokens that trigger a model to produce a specific prediction when concatenated to any input from a dataset. We propose a gradient-guided search over tokens which finds short trigger sequences (e.g., one word for classification and four words for language modeling) that successfully trigger the target prediction. For example, triggers cause SNLI entailment accuracy to drop from 89.94% to 0.55%, 72% of “why” questions in SQuAD to be answered “to kill american people”, and the GPT-2 language model to spew racist output even when conditioned on non-racial contexts. Furthermore, although the triggers are optimized using white-box access to a specific model, they transfer to other models for all tasks we consider. Finally, since triggers are input-agnostic, they provide an analysis of global model behavior. For instance, they confirm that SNLI models exploit dataset biases and help to diagnose heuristics learned by reading comprehension models.
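The gradient-guided search scores candidate replacement tokens with a first-order (HotFlip-style) approximation of how the loss would change if a trigger token were swapped. A sketch of that ranking step using random arrays as stand-ins for the model's embedding matrix and the backpropagated gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, DIM, TRIGGER_LEN = 1000, 64, 3
embedding_matrix = rng.standard_normal((VOCAB, DIM))      # stand-in for the model's embeddings
trigger_ids = rng.integers(0, VOCAB, size=TRIGGER_LEN)    # current trigger tokens

# Stand-in for the gradient of the target-prediction loss with respect to each trigger
# token's embedding, averaged over a batch of inputs (obtained by backprop in practice).
grad_wrt_trigger = rng.standard_normal((TRIGGER_LEN, DIM))

def best_replacements(position, top_k=5):
    """First-order estimate of which vocabulary tokens would most reduce the loss
    if swapped into `position` of the trigger."""
    cur = embedding_matrix[trigger_ids[position]]
    # Taylor approximation of the loss change for swapping in token w: (E[w] - E[cur]) . grad
    scores = (embedding_matrix - cur) @ grad_wrt_trigger[position]
    return np.argsort(scores)[:top_k]   # most negative = largest estimated loss decrease

print(best_replacements(0))
```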

Proceedings ArticleDOI
21 Jul 2017
TL;DR: This work proposes a new class of LSTM network, Global Context-Aware Attention LSTM (GCA-LSTM), for 3D action recognition, which is able to selectively focus on the informative joints in the action sequence with the assistance of global contextual information.
Abstract: Long Short-Term Memory (LSTM) networks have shown superior performance in 3D human action recognition due to their power in modeling the dynamics and dependencies in sequential data. Since not all joints are informative for action analysis and the irrelevant joints often bring a lot of noise, we need to pay more attention to the informative ones. However, original LSTM does not have strong attention capability. Hence we propose a new class of LSTM network, Global Context-Aware Attention LSTM (GCA-LSTM), for 3D action recognition, which is able to selectively focus on the informative joints in the action sequence with the assistance of global contextual information. In order to achieve a reliable attention representation for the action sequence, we further propose a recurrent attention mechanism for our GCA-LSTM network, in which the attention performance is improved iteratively. Experiments show that our end-to-end network can reliably focus on the most informative joints in each frame of the skeleton sequence. Moreover, our network yields state-of-the-art performance on three challenging datasets for 3D action recognition.
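The attention mechanism described above scores each joint against a global context vector and re-weights its features accordingly. A minimal single-iteration sketch with random features and a hypothetical learned scoring matrix; the paper's network refines the context and repeats the attention over several iterations:

```python
import numpy as np

rng = np.random.default_rng(0)

N_JOINTS, FEAT = 25, 16
joint_feats = rng.standard_normal((N_JOINTS, FEAT))   # per-joint features for one frame

# Global context: a summary of the whole sequence/frame (here simply the mean).
global_context = joint_feats.mean(axis=0)

# Hypothetical learned parameters of the attention scorer.
W = rng.standard_normal((FEAT, FEAT)) * 0.1

def attend(feats, context):
    """Score each joint against the global context and re-weight its features."""
    scores = feats @ W @ context                 # informativeness of each joint
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over joints
    return weights, (weights[:, None] * feats).sum(axis=0)

weights, refined = attend(joint_feats, global_context)
# In a recurrent scheme, `refined` would update the global context and the
# attention would be recomputed for another iteration.
print(weights.round(3))
```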

Journal ArticleDOI
27 May 2016-Science
TL;DR: Diaryl dihydrophenazines are introduced, identified through computationally directed discovery, as a class of strongly reducing photoredox catalysts that achieve high initiator efficiencies through activation by visible light to synthesize polymers with tunable molecular weights and low dispersities.
Abstract: Atom transfer radical polymerization (ATRP) has become one of the most implemented methods for polymer synthesis, owing to impressive control over polymer composition and associated properties. However, contamination of the polymer by the metal catalyst remains a major limitation. Organic ATRP photoredox catalysts have been sought to address this difficult challenge but have not achieved the precision performance of metal catalysts. Here, we introduce diaryl dihydrophenazines, identified through computationally directed discovery, as a class of strongly reducing photoredox catalysts. These catalysts achieve high initiator efficiencies through activation by visible light to synthesize polymers with tunable molecular weights and low dispersities.

Journal ArticleDOI
TL;DR: It is demonstrated that agricultural intensification reduces network complexity and the abundance of keystone taxa in the root microbiome, and this is the first study to report mycorrhizal keystone taxa for agroecosystems.
Abstract: Root-associated microbes play a key role in plant performance and productivity, making them important players in agroecosystems. So far, very few studies have assessed the impact of different farming systems on the root microbiota and it is still unclear whether agricultural intensification influences the structure and complexity of microbial communities. We investigated the impact of conventional, no-till, and organic farming on wheat root fungal communities using PacBio SMRT sequencing on samples collected from 60 farmlands in Switzerland. Organic farming harbored a much more complex fungal network with significantly higher connectivity than conventional and no-till farming systems. The abundance of keystone taxa was the highest under organic farming where agricultural intensification was the lowest. We also found a strong negative association (R2 = 0.366; P < 0.0001) between agricultural intensification and root fungal network connectivity. The occurrence of keystone taxa was best explained by soil phosphorus levels, bulk density, pH, and mycorrhizal colonization. The majority of keystone taxa are known to form arbuscular mycorrhizal associations with plants and belong to the orders Glomerales, Paraglomerales, and Diversisporales. Supporting this, the abundance of mycorrhizal fungi in roots and soils was also significantly higher under organic farming. To our knowledge, this is the first study to report mycorrhizal keystone taxa for agroecosystems, and we demonstrate that agricultural intensification reduces network complexity and the abundance of keystone taxa in the root microbiome.
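Network complexity and connectivity in such studies are typically derived from a taxon co-occurrence network built by thresholding pairwise correlations of abundances. A small illustration with synthetic abundances; the threshold and the hub-based keystone criterion here are simplifications of the study's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical abundance table: rows = root samples, columns = fungal taxa.
n_samples, n_taxa = 40, 12
base = rng.poisson(5, size=(n_samples, 1)).astype(float)
abundance = rng.poisson(lam=5, size=(n_samples, n_taxa)).astype(float)
abundance[:, :4] += 2 * base   # first four taxa co-vary (a hypothetical guild)

# Co-occurrence network: taxa are nodes, edges connect strongly correlated taxa.
corr = np.corrcoef(abundance, rowvar=False)
np.fill_diagonal(corr, 0.0)
adjacency = np.abs(corr) > 0.6          # illustrative threshold, not the study's

degree = adjacency.sum(axis=0)
connectivity = adjacency.sum() / (n_taxa * (n_taxa - 1))   # edge density of the network

# "Keystone" candidates here: unusually well-connected taxa (hub criterion only;
# the study combined connectivity with other criteria).
keystone_idx = np.where(degree >= np.percentile(degree, 90))[0]
print("network connectivity:", round(float(connectivity), 3))
print("hub taxa indices:", keystone_idx)
```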

Journal ArticleDOI
TL;DR: In this paper, the authors study constraints imposed by two proposed string Swampland criteria on cosmology and find that inflationary models are generically in tension with these two criteria, and they argue that the universe will undergo a phase transition within a few Hubble times.

Journal ArticleDOI
TL;DR: Based on survival analysis, conventional RECIST might underestimate the benefit of pembrolizumab in approximately 15% of patients; modified criteria that permit treatment beyond initial progression per RECIST v1.1 might prevent premature cessation of treatment.
Abstract: Purpose We evaluated atypical response patterns and the relationship between overall survival and best overall response measured per immune-related response criteria (irRC) and Response Evaluation Criteria in Solid Tumors, version 1.1 (RECIST v1.1) in patients with advanced melanoma treated with pembrolizumab in the phase Ib KEYNOTE-001 study (clinical trial information: NCT01295827). Patients and Methods Patients received pembrolizumab 2 or 10 mg/kg every 2 weeks or every 3 weeks. Atypical responses were identified by using centrally assessed irRC data in patients with ≥ 28 weeks of imaging. Pseudoprogression was defined as ≥ 25% increase in tumor burden at week 12 (early) or any assessment after week 12 (delayed) that was not confirmed as progressive disease at next assessment. Response was assessed centrally per irRC and RECIST v1.1. Results Of the 655 patients with melanoma enrolled, 327 had ≥ 28 weeks of imaging follow-up. Twenty-four (7%) of these 327 patients had atypical responses (15 [5%] with early p...
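The pseudoprogression definition quoted in the abstract translates directly into a small decision rule over longitudinal tumor-burden assessments. A simplified sketch (real irRC assessment involves more than this):

```python
def classify_pseudoprogression(assessments):
    """
    assessments: list of (week, pct_change_in_tumor_burden, confirmed_pd_at_next)
    Returns a label following the definition quoted in the abstract: a >= 25% increase
    at week 12 (early) or after week 12 (delayed) that is NOT confirmed as
    progressive disease at the next assessment.
    """
    for week, pct_increase, confirmed_pd_next in assessments:
        if pct_increase >= 25 and not confirmed_pd_next:
            if week == 12:
                return "early pseudoprogression"
            if week > 12:
                return "delayed pseudoprogression"
    return None

# Hypothetical patient: transient 30% increase at week 12 that later regressed.
history = [(12, 30, False), (18, -10, False)]
print(classify_pseudoprogression(history))
```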

Proceedings Article
03 May 2021
TL;DR: The Vision Transformer (ViT) as discussed by the authors uses a pure transformer applied directly to sequences of image patches to perform very well on image classification tasks, achieving state-of-the-art results on ImageNet, CIFAR-100, VTAB, etc.
Abstract: While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
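The patch-based input described above is simple to make concrete: the image is cut into fixed-size patches, each patch is flattened and linearly projected, a class token is prepended, and position embeddings are added. A minimal numpy sketch with randomly initialized (hypothetical) parameters in place of learned ones:

```python
import numpy as np

rng = np.random.default_rng(0)

IMG, PATCH, DIM = 32, 8, 64                 # image size, patch size, embedding width
image = rng.random((IMG, IMG, 3))

# Split the image into non-overlapping patches and flatten each one.
n_side = IMG // PATCH
patches = (image.reshape(n_side, PATCH, n_side, PATCH, 3)
                .transpose(0, 2, 1, 3, 4)
                .reshape(n_side * n_side, PATCH * PATCH * 3))

# Hypothetical learned parameters: linear projection, class token, position embeddings.
proj = rng.standard_normal((patches.shape[1], DIM)) * 0.02
cls_token = rng.standard_normal((1, DIM)) * 0.02
pos_embed = rng.standard_normal((patches.shape[0] + 1, DIM)) * 0.02

tokens = np.concatenate([cls_token, patches @ proj], axis=0) + pos_embed
print(tokens.shape)   # (1 + 16, 64): the sequence a standard Transformer encoder consumes
```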

Proceedings ArticleDOI
Yankai Lin, Zhiyuan Liu, Huanbo Luan, Maosong Sun, Siwei Rao, Song Liu
01 Jun 2015
TL;DR: This model considers relation paths as translations between entities for representation learning, and addresses two key challenges: (1) Since not all relation paths are reliable, it designs a path-constraint resource allocation algorithm to measure the reliability of relation paths, and (2) it represents relation paths via semantic composition of relation embeddings.
Abstract: Representation learning of knowledge bases aims to embed both entities and relations into a low-dimensional space. Most existing methods only consider direct relations in representation learning. We argue that multiple-step relation paths also contain rich inference patterns between entities, and propose a path-based representation learning model. This model considers relation paths as translations between entities for representation learning, and addresses two key challenges: (1) Since not all relation paths are reliable, we design a path-constraint resource allocation algorithm to measure the reliability of relation paths. (2) We represent relation paths via semantic composition of relation embeddings. Experimental results on real-world datasets show that, as compared with baselines, our model achieves significant and consistent improvements on knowledge base completion and relation extraction from text. The source code of this paper can be obtained from https://github.com/mrlyk423/relation_extraction
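The path-constraint resource allocation idea named above can be pictured as pouring a unit of "resource" into the head entity and splitting it evenly along each relation in the path; the fraction that reaches the tail measures the path's reliability. A loose sketch on a made-up three-entity graph:

```python
from collections import defaultdict

# A tiny knowledge graph: edges[relation] maps a head entity to its tail entities.
# Entities and relations are made up for illustration.
edges = {
    "born_in":     {"alice": ["paris"]},
    "city_of":     {"paris": ["france"]},
    "nationality": {"alice": ["france"]},
}

def path_reliability(head, tail, path):
    """Start with resource 1.0 at the head, split it evenly among successors along
    each relation in the path, and return the amount that reaches the tail."""
    resource = {head: 1.0}
    for relation in path:
        nxt = defaultdict(float)
        for entity, amount in resource.items():
            successors = edges.get(relation, {}).get(entity, [])
            for s in successors:
                nxt[s] += amount / len(successors)
        resource = nxt
    return resource.get(tail, 0.0)

# Reliability of the two-step path (born_in, city_of) as evidence for `nationality`.
print(path_reliability("alice", "france", ["born_in", "city_of"]))   # 1.0 here
```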

Book
31 Mar 2015
TL;DR: This survey summarizes almost 50 years of research and development in the field of Augmented Reality (AR), provides an overview of the common definitions of AR, and shows how AR fits into taxonomies of other related technologies.
Abstract: This survey summarizes almost 50 years of research and development in the field of Augmented Reality (AR). From early research in the 1960s until widespread availability by the 2010s there has been steady progress towards the goal of being able to seamlessly combine real and virtual worlds. We provide an overview of the common definitions of AR, and show how AR fits into taxonomies of other related technologies. A history of important milestones in Augmented Reality is followed by sections on the key enabling technologies of tracking, display and input devices. We also review design guidelines and provide some examples of successful AR applications. Finally, we conclude with a summary of directions for future work and a review of some of the areas that are currently being researched.

Journal ArticleDOI
Lindsey Bleem, B. Stalder, T. de Haan, K. A. Aird, Steven W. Allen, Douglas Applegate, Matthew L. N. Ashby, Mark W. Bautz, Matthew B. Bayliss, Bradford Benson, Sebastian Bocquet, Mark Brodwin, John E. Carlstrom, C. L. Chang, I-Non Chiu, Hsiao-Mei Cho, Alejandro Clocchiatti, T. M. Crawford, A. T. Crites, Shantanu Desai, J. P. Dietrich, Matt Dobbs, R. J. Foley, William R. Forman, Elizabeth George, Michael D. Gladders, Anthony H. Gonzalez, N. W. Halverson, C. Hennig, Henk Hoekstra, Gilbert Holder, W. L. Holzapfel, J. D. Hrubes, Christine Jones, Ryan Keisler, Lloyd Knox, Adrian T. Lee, E. M. Leitch, Jiayi Liu, M. Lueker, Daniel M. Luong-Van, Adam Mantz, Daniel P. Marrone, Michael McDonald, Jeff McMahon, S. S. Meyer, L. M. Mocanu, Joseph J. Mohr, S. S. Murray, Stephen Padin, C. Pryke, Christian L. Reichardt, Armin Rest, Jonathan Ruel, J. E. Ruhl, Benjamin Saliwanchik, A. Saro, J. T. Sayre, K. K. Schaffer, Tim Schrabback, Erik Shirokoff, Jizhou Song, Helmuth Spieler, Spencer A. Stanford, Z. K. Staniszewski, Antony A. Stark, K. T. Story, Christopher W. Stubbs, K. Vanderlinde, Joaquin Vieira, Alexey Vikhlinin, R. Williamson, Oliver Zahn, A. Zenteno
TL;DR: In this article, the authors presented a catalog of galaxy clusters selected via their Sunyaev-Zel'dovich (SZ) effect signature from 2500 deg^2 of South Pole Telescope (SPT) data.
Abstract: We present a catalog of galaxy clusters selected via their Sunyaev-Zel'dovich (SZ) effect signature from 2500 deg^2 of South Pole Telescope (SPT) data. This work represents the complete sample of clusters detected at high significance in the 2500 deg^2 SPT-SZ survey, which was completed in 2011. A total of 677 (409) cluster candidates are identified above a signal-to-noise threshold of ξ = 4.5 (5.0). Ground- and space-based optical and near-infrared (NIR) imaging confirms overdensities of similarly colored galaxies in the direction of 516 (or 76%) of the ξ > 4.5 candidates and 387 (or 95%) of the ξ > 5 candidates; the measured purity is consistent with expectations from simulations. Of these confirmed clusters, 415 were first identified in SPT data, including 251 new discoveries reported in this work. We estimate photometric redshifts for all candidates with identified optical and/or NIR counterparts; we additionally report redshifts derived from spectroscopic observations for 141 of these systems. The mass threshold of the catalog is roughly independent of redshift above z ~ 0.25 leading to a sample of massive clusters that extends to high redshift. The median mass of the sample is M_(500c(ρcrit)) ~ 3.5 x 10^(14)M_☉ h_(70)^(-1), the median redshift is z_(med) = 0.55, and the highest-redshift systems are at z > 1.4. The combination of large redshift extent, clean selection, and high typical mass makes this cluster sample of particular interest for cosmological analyses and studies of cluster formation and evolution.

Journal ArticleDOI
TL;DR: It is argued that, by consecutively undertaking medication tapering followed by a longer washout period before starting MBCT, even stronger effects of MBCT might be observed, and the proposed change from OPV to IPV might lead to increased all-cause mortality.

Journal ArticleDOI
TL;DR: In this paper, the authors measured the distribution of stars in the [α/Fe] versus [Fe/H] plane and the metallicity distribution functions (MDFs) across an unprecedented volume of the Milky Way disk, with radius 3 < R < 15 kpc and height kpc.
Abstract: Using a sample of 69,919 red giants from the SDSS-III/APOGEE Data Release 12, we measure the distribution of stars in the [α/Fe] versus [Fe/H] plane and the metallicity distribution functions (MDFs) across an unprecedented volume of the Milky Way disk, with radius 3 < R < 15 kpc and height kpc. Stars in the inner disk (R < 5 kpc) lie along a single track in [α/Fe] versus [Fe/H], starting with α-enhanced, metal-poor stars and ending at [α/Fe] ∼ 0 and [Fe/H] ∼ +0.4. At larger radii we find two distinct sequences in [α/Fe] versus [Fe/H] space, with a roughly solar-α sequence that spans a decade in metallicity and a high-α sequence that merges with the low-α sequence at super-solar [Fe/H]. The location of the high-α sequence is nearly constant across the disk.
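The quantities named above, the density of stars in the [α/Fe] versus [Fe/H] plane and MDFs in radial bins, reduce to histograms over the catalog. A sketch on synthetic stand-in data rather than actual APOGEE measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical catalog of red giants: [Fe/H], [alpha/Fe], and Galactocentric radius R (kpc).
n = 50_000
fe_h   = rng.normal(-0.1, 0.3, n)
alpha  = np.clip(0.1 - 0.15 * fe_h + rng.normal(0, 0.03, n), -0.1, 0.4)
radius = rng.uniform(3, 15, n)

# Distribution of stars in the [alpha/Fe] versus [Fe/H] plane (a 2D histogram).
plane, feh_edges, alpha_edges = np.histogram2d(fe_h, alpha, bins=(60, 40),
                                               range=[[-1.0, 0.5], [-0.1, 0.4]])

# Metallicity distribution functions (MDFs) in radial bins.
for r_lo, r_hi in [(3, 5), (7, 9), (13, 15)]:
    in_bin = (radius >= r_lo) & (radius < r_hi)
    mdf, _ = np.histogram(fe_h[in_bin], bins=feh_edges, density=True)
    print(f"R = {r_lo}-{r_hi} kpc: peak [Fe/H] ≈ {feh_edges[np.argmax(mdf)]:.2f}")
```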