
Journal ArticleDOI
TL;DR: This work recognizes three waves of mycorrhizal evolution: the first with AM in early land plants; the second in the Cretaceous, with multiple new NM or EcM lineages and ericoid and orchid mycorrhizas; and a third, recent and ongoing, that has produced root complexity linked to rapid plant diversification in biodiversity hotspots.
Abstract: The majority of vascular plants are mycorrhizal: 72% are arbuscular mycorrhizal (AM), 2.0% are ectomycorrhizal (EcM), 1.5% are ericoid mycorrhizal and 10% are orchid mycorrhizal. Just 8% are completely nonmycorrhizal (NM), whereas 7% have inconsistent NM-AM associations. Most NM and NM-AM plants are nutritional specialists (e.g. carnivores and parasites) or habitat specialists (e.g. hydrophytes and epiphytes). Mycorrhizal associations are consistent in most families, but there are exceptions with complex roots (e.g. both EcM and AM). We recognize three waves of mycorrhizal evolution, starting with AM in early land plants, continuing in the Cretaceous with multiple new NM or EcM lineages and ericoid and orchid mycorrhizas. The third wave, which is recent and ongoing, has resulted in root complexity linked to rapid plant diversification in biodiversity hotspots.

737 citations


Journal ArticleDOI
TL;DR: Early administration of dexamethasone could reduce duration of mechanical ventilation and overall mortality in patients with established moderate-to-severe ARDS.

737 citations


Journal ArticleDOI
TL;DR: In this paper, the mass and equatorial radius of the millisecond pulsar PSR J0030+0451 were estimated via relativistic ray-tracing of thermal emission from hot regions of the pulsar's surface.
Abstract: We report on Bayesian parameter estimation of the mass and equatorial radius of the millisecond pulsar PSR J0030+0451, conditional on pulse-profile modeling of Neutron Star Interior Composition Explorer X-ray spectral-timing event data. We perform relativistic ray-tracing of thermal emission from hot regions of the pulsar's surface. We assume two distinct hot regions based on two clear pulsed components in the phase-folded pulse-profile data; we explore a number of forms (morphologies and topologies) for each hot region, inferring their parameters in addition to the stellar mass and radius. For the family of models considered, the evidence (prior predictive probability of the data) strongly favors a model that permits both hot regions to be located in the same rotational hemisphere. Models wherein both hot regions are assumed to be simply connected circular single-temperature spots, in particular those where the spots are assumed to be reflection-symmetric with respect to the stellar origin, are strongly disfavored. For the inferred configuration, one hot region subtends an angular extent of only a few degrees (in spherical coordinates with origin at the stellar center) and we are insensitive to other structural details; the second hot region is far more azimuthally extended in the form of a narrow arc, thus requiring a larger number of parameters to describe. The inferred mass $M$ and equatorial radius $R_{\mathrm{eq}}$ are both constrained, with the compactness more tightly constrained; the credible-interval bounds reported are approximately the 16% and 84% quantiles in marginal posterior mass.

737 citations


Journal ArticleDOI
10 Feb 2017 - Science
TL;DR: MINFLUX as discussed by the authors is a concept for localizing photon emitters in space by probing the emitter with a local intensity minimum of excitation light, which minimizes the fluorescence photons needed for high localization precision.
Abstract: We introduce MINFLUX, a concept for localizing photon emitters in space. By probing the emitter with a local intensity minimum of excitation light, MINFLUX minimizes the fluorescence photons needed for high localization precision. In our experiments, 22 times fewer fluorescence photons are required as compared to popular centroid localization. In superresolution microscopy, MINFLUX attained ~1-nm precision, resolving molecules only 6 nanometers apart. MINFLUX tracking of single fluorescent proteins increased the temporal resolution and the number of localizations per trace by a factor of 100, as demonstrated with diffusing 30S ribosomal subunits in living Escherichia coli. As conceptual limits have not been reached, we expect this localization modality to break new ground for observing the dynamics, distribution, and structure of macromolecules in living cells and beyond.
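The estimation principle lends itself to a small numerical illustration. The sketch below is a one-dimensional toy under assumed parameters (quadratic intensity scale alpha, background bg, probe positions), not the authors' implementation: near a zero of the excitation pattern the intensity grows roughly quadratically with the emitter's offset, so Poisson counts at a few probe positions support a maximum-likelihood position estimate from very few photons.

```python
import numpy as np

rng = np.random.default_rng(0)

x_true = 3.2                           # emitter position (nm) to be estimated
probes = np.array([-10.0, 0.0, 10.0])  # excitation-minimum positions (nm), assumed
alpha, bg = 0.5, 1.0                   # assumed quadratic scale and background

# Near the intensity zero, expected counts grow ~quadratically with offset.
expected = alpha * (x_true - probes) ** 2 + bg
counts = rng.poisson(expected)         # on the order of 100 photons in total

# Maximum-likelihood position estimate on a 1D grid.
grid = np.linspace(-10.0, 10.0, 2001)
lam = alpha * (grid[:, None] - probes[None, :]) ** 2 + bg
loglik = (counts * np.log(lam) - lam).sum(axis=1)
print("estimated position:", grid[np.argmax(loglik)], "nm")
```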

737 citations


Journal ArticleDOI
TL;DR: A carefully chosen metal-organic framework (MOF) material featuring high porosity and exceptional chemical stability that is extraordinarily effective for the degradation of nerve agents and their simulants is reported.
Abstract: Chemical warfare agents containing phosphonate ester bonds are among the most toxic chemicals known to mankind. Recent global military events, such as the conflict and disarmament in Syria, have brought into focus the need to find effective strategies for the rapid destruction of these banned chemicals. Solutions are needed for immediate personal protection (for example, the filtration and catalytic destruction of airborne versions of agents), bulk destruction of chemical weapon stockpiles, protection (via coating) of clothing, equipment and buildings, and containment of agent spills. Solid heterogeneous materials such as modified activated carbon or metal oxides exhibit many desirable characteristics for the destruction of chemical warfare agents. However, low sorptive capacities, low effective active site loadings, deactivation of the active site, slow degradation kinetics, and/or a lack of tailorability offer significant room for improvement in these materials. Here, we report a carefully chosen metal-organic framework (MOF) material featuring high porosity and exceptional chemical stability that is extraordinarily effective for the degradation of nerve agents and their simulants. Experimental and computational evidence points to Lewis-acidic Zr(IV) ions as the active sites and to their superb accessibility as a defining element of their efficacy.

737 citations


Posted Content
TL;DR: A powerful AGW baseline is designed, achieving state-of-the-art or at least comparable performance on twelve datasets for four different Re-ID tasks, and a new evaluation metric (mINP) is introduced, indicating the cost of finding all the correct matches, which provides an additional criterion to evaluate a Re-ID system for real applications.
Abstract: Person re-identification (Re-ID) aims at retrieving a person of interest across multiple non-overlapping cameras. With the advancement of deep neural networks and the increasing demand for intelligent video surveillance, it has attracted significantly increased interest in the computer vision community. By dissecting the involved components in developing a person Re-ID system, we categorize it into the closed-world and open-world settings. The widely studied closed-world setting is usually applied under various research-oriented assumptions, and has achieved inspiring success using deep learning techniques on a number of datasets. We first conduct a comprehensive overview with in-depth analysis for closed-world person Re-ID from three different perspectives, including deep feature representation learning, deep metric learning and ranking optimization. With the performance saturation under the closed-world setting, the research focus for person Re-ID has recently shifted to the open-world setting, facing more challenging issues. This setting is closer to practical applications under specific scenarios. We summarize the open-world Re-ID in terms of five different aspects. By analyzing the advantages of existing methods, we design a powerful AGW baseline, achieving state-of-the-art or at least comparable performance on twelve datasets for four different Re-ID tasks. Meanwhile, we introduce a new evaluation metric (mINP) for person Re-ID, indicating the cost of finding all the correct matches, which provides an additional criterion to evaluate a Re-ID system for real applications. Finally, some important yet under-investigated open issues are discussed.
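As a sketch of how we read the mINP definition (the penalty is governed by the rank of the hardest, i.e. last, correct match), here is a minimal Python version; the zero-return convention for queries with no correct match is our assumption:

```python
import numpy as np

def inp(ranked_match_flags):
    """INP for one query: |G| divided by the 1-based rank of the
    hardest (last) correct match in the ranked gallery list."""
    positives = np.flatnonzero(ranked_match_flags)
    if positives.size == 0:
        return 0.0                      # assumed convention: no match retrievable
    return positives.size / (positives[-1] + 1)

def minp(per_query_flags):
    return float(np.mean([inp(f) for f in per_query_flags]))

# Toy example: two queries over a gallery of six ranked items.
flags = [np.array([1, 0, 1, 0, 0, 0], bool),   # hardest match at rank 3 -> 2/3
         np.array([0, 1, 0, 0, 1, 1], bool)]   # hardest match at rank 6 -> 3/6
print(minp(flags))                             # (2/3 + 1/2) / 2 ≈ 0.583
```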

737 citations


Journal ArticleDOI
TL;DR: This paper presents the first globally optimal algorithm, named Go-ICP, for Euclidean (rigid) registration of two 3D point-sets under the $L_2$ error metric, and derives novel upper and lower bounds for the registration error function.
Abstract: The Iterative Closest Point (ICP) algorithm is one of the most widely used methods for point-set registration. However, being based on local iterative optimization, ICP is known to be susceptible to local minima. Its performance critically relies on the quality of the initialization and only local optimality is guaranteed. This paper presents the first globally optimal algorithm, named Go-ICP, for Euclidean (rigid) registration of two 3D point-sets under the $L_2$ error metric defined in ICP. The Go-ICP method is based on a branch-and-bound scheme that searches the entire 3D motion space $SE(3)$. By exploiting the special structure of $SE(3)$ geometry, we derive novel upper and lower bounds for the registration error function. Local ICP is integrated into the BnB scheme, which speeds up the new method while guaranteeing global optimality. We also discuss extensions, addressing the issue of outlier robustness. The evaluation demonstrates that the proposed method is able to produce reliable registration results regardless of the initialization. Go-ICP can be applied in scenarios where an optimal solution is desirable or where a good initialization is not always available.
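The branch-and-bound search over $SE(3)$ is too involved for a snippet, but the local ICP loop that Go-ICP integrates is compact. A minimal point-to-point sketch using NumPy/SciPy (no outlier trimming, unlike the paper's robust extension):

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    """Least-squares rotation R and translation t with dst ≈ src @ R.T + t
    (Kabsch algorithm on the 3x3 cross-covariance)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=50, tol=1e-9):
    """Classic local ICP: alternate closest-point matching and rigid fitting.
    Converges only to a local minimum, which is what motivates Go-ICP."""
    tree, cur, prev = cKDTree(dst), src.copy(), np.inf
    for _ in range(iters):
        dists, idx = tree.query(cur)           # nearest neighbours in dst
        R, t = best_rigid(cur, dst[idx])
        cur = cur @ R.T + t
        err = float((dists ** 2).mean())
        if prev - err < tol:
            break
        prev = err
    return cur, err
```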

736 citations


Proceedings ArticleDOI
07 Jan 2019
TL;DR: A novel Trident Network (TridentNet) aiming to generate scale-specific feature maps with a uniform representational power is proposed and a parallel multi-branch architecture in which each branch shares the same transformation parameters but with different receptive fields is constructed.
Abstract: Scale variation is one of the key challenges in object detection. In this work, we first present a controlled experiment to investigate the effect of receptive fields for scale variation in object detection. Based on the findings from the exploration experiments, we propose a novel Trident Network (TridentNet) aiming to generate scale-specific feature maps with a uniform representational power. We construct a parallel multi-branch architecture in which each branch shares the same transformation parameters but with different receptive fields. Then, we adopt a scale-aware training scheme to specialize each branch by sampling object instances of proper scales for training. As a bonus, a fast approximation version of TridentNet could achieve significant improvements without any additional parameters and computational cost compared with the vanilla detector. On the COCO dataset, our TridentNet with ResNet-101 backbone achieves state-of-the-art single-model results of 48.4 mAP. Codes are available at https://git.io/fj5vR.
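The parameter-sharing trick is easy to sketch: a single kernel applied with several dilation rates gives branches with different receptive fields and no extra parameters. A minimal PyTorch illustration of the idea (the paper's trident blocks are residual, deeper, and trained with scale-aware sampling):

```python
import torch
import torch.nn.functional as F
from torch import nn

class TridentBlock(nn.Module):
    """One shared 3x3 convolution evaluated at several dilation rates."""
    def __init__(self, channels, dilations=(1, 2, 3)):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(channels, channels, 3, 3))
        nn.init.kaiming_normal_(self.weight)
        self.dilations = dilations

    def forward(self, x):
        # padding=d keeps the spatial size for a 3x3 kernel with dilation d
        return [F.relu(F.conv2d(x, self.weight, padding=d, dilation=d))
                for d in self.dilations]

x = torch.randn(1, 16, 32, 32)
outs = TridentBlock(16)(x)
print([tuple(o.shape) for o in outs])   # three branches, same 32x32 size
```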

736 citations



Journal ArticleDOI
TL;DR: Atherosclerosis occurs in the subendothelial space (intima) of medium-sized arteries at regions of disturbed blood flow and is triggered by an interplay between endothelial dysfunction and subendothelial lipoprotein retention, as mentioned in this paper.
Abstract: Atherosclerosis occurs in the subendothelial space (intima) of medium-sized arteries at regions of disturbed blood flow and is triggered by an interplay between endothelial dysfunction and subendothelial lipoprotein retention. Over time, this process stimulates a nonresolving inflammatory response that can cause intimal destruction, arterial thrombosis, and end-organ ischemia. Recent advances highlight important cell biological atherogenic processes, including mechanotransduction and inflammatory processes in endothelial cells, origins and contributions of lesional macrophages, and origins and phenotypic switching of lesional smooth muscle cells. These advances illustrate how in-depth mechanistic knowledge of the cellular pathobiology of atherosclerosis can lead to new ideas for therapy.

736 citations


Journal ArticleDOI
TL;DR: The characteristics of lncRNAs, including their roles, functions, and working mechanisms, are summarized; methods for identifying and annotating lncRNAs are described; and future opportunities for lncRNA-based therapies using antisense oligonucleotides are discussed.

Journal ArticleDOI
TL;DR: A survey including hyperspectral sensors, inherent data processing and applications focusing on both agriculture and forestry—wherein the combination of UAVs and hyperspectral sensors plays a central role—is presented in this paper.
Abstract: Traditional imagery—provided, for example, by RGB and/or NIR sensors—has proven to be useful in many agroforestry applications. However, it lacks the spectral range and precision to profile materials and organisms that only hyperspectral sensors can provide. This kind of high-resolution spectroscopy was first used in satellites and later in manned aircraft, which are significantly expensive platforms and extremely restrictive due to availability limitations and/or complex logistics. More recently, UAS have emerged as a very popular and cost-effective remote sensing technology, composed of aerial platforms capable of carrying small-sized and lightweight sensors. Meanwhile, hyperspectral technology developments have consistently resulted in smaller and lighter sensors that can currently be integrated into UAS for either scientific or commercial purposes. The hyperspectral sensors' ability to measure hundreds of bands raises complexity when considering the sheer quantity of acquired data, whose usefulness depends on both calibration and corrective tasks occurring in pre- and post-flight stages. Further steps regarding hyperspectral data processing must be performed towards the retrieval of relevant information, which provides the true benefits for assertive interventions in agricultural crops and forested areas. Considering the aforementioned topics and the goal of providing a global view focused on hyperspectral-based remote sensing supported by UAV platforms, a survey including hyperspectral sensors, inherent data processing and applications focusing on both agriculture and forestry—wherein the combination of UAVs and hyperspectral sensors plays a central role—is presented in this paper. Firstly, the advantages of hyperspectral data over RGB imagery and multispectral data are highlighted. Then, hyperspectral acquisition devices are addressed, including sensor types, acquisition modes and UAV-compatible sensors that can be used for both research and commercial purposes. Pre-flight operations and post-flight pre-processing are pointed out as necessary to ensure the usefulness of hyperspectral data for further processing towards the retrieval of conclusive information. With the goal of simplifying hyperspectral data processing—by isolating the common user from the processes' mathematical complexity—several available toolboxes that allow direct access to level-one hyperspectral data are presented. Moreover, research works on the symbiosis between UAVs and hyperspectral sensors for agriculture and forestry applications are reviewed, just before the paper's conclusions.

Journal ArticleDOI
TL;DR: Three new measures are developed and psychometrically assessed: the Acceptability of Intervention Measure (AIM), the Intervention Appropriateness Measure (IAM), and the Feasibility of Intervention Measure (FIM); all three demonstrate promising psychometric properties.
Abstract: Implementation outcome measures are essential for monitoring and evaluating the success of implementation efforts. Yet, currently available measures lack conceptual clarity and have largely unknown reliability and validity. This study developed and psychometrically assessed three new measures: the Acceptability of Intervention Measure (AIM), Intervention Appropriateness Measure (IAM), and Feasibility of Intervention Measure (FIM). Thirty-six implementation scientists and 27 mental health professionals assigned 31 items to the constructs and rated their confidence in their assignments. The Wilcoxon one-sample signed rank test was used to assess substantive and discriminant content validity. Exploratory and confirmatory factor analysis (EFA and CFA) and Cronbach alphas were used to assess the validity of the conceptual model. Three hundred twenty-six mental health counselors read one of six randomly assigned vignettes depicting a therapist contemplating adopting an evidence-based practice (EBP). Participants used 15 items to rate the therapist’s perceptions of the acceptability, appropriateness, and feasibility of adopting the EBP. CFA and Cronbach alphas were used to refine the scales, assess structural validity, and assess reliability. Analysis of variance (ANOVA) was used to assess known-groups validity. Finally, half of the counselors were randomly assigned to receive the same vignette and the other half the opposite vignette; and all were asked to re-rate acceptability, appropriateness, and feasibility. Pearson correlation coefficients were used to assess test-retest reliability and linear regression to assess sensitivity to change. All but five items exhibited substantive and discriminant content validity. A trimmed CFA with five items per construct exhibited acceptable model fit (CFI = 0.98, RMSEA = 0.08) and high factor loadings (0.79 to 0.94). The alphas for 5-item scales were between 0.87 and 0.89. Scale refinement based on measure-specific CFAs and Cronbach alphas using vignette data produced 4-item scales (α’s from 0.85 to 0.91). A three-factor CFA exhibited acceptable fit (CFI = 0.96, RMSEA = 0.08) and high factor loadings (0.75 to 0.89), indicating structural validity. ANOVA showed significant main effects, indicating known-groups validity. Test-retest reliability coefficients ranged from 0.73 to 0.88. Regression analysis indicated each measure was sensitive to change in both directions. The AIM, IAM, and FIM demonstrate promising psychometric properties. Predictive validity assessment is planned.
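For reference, the Cronbach alpha used throughout for reliability is straightforward to compute; a sketch with simulated ratings (the data are invented, four items per scale as in the refined measures):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of scale ratings."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))                  # one underlying construct
ratings = latent + 0.6 * rng.normal(size=(200, 4))  # four noisy items
print(round(cronbach_alpha(ratings), 2))            # coherent items -> high alpha
```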

Proceedings Article
28 Mar 2019
TL;DR: This paper standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications, and proposes a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations.
Abstract: In this paper we establish rigorous benchmarks for image classifier robustness. Our first benchmark, ImageNet-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications. Then we propose a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations. Unlike recent robustness research, this benchmark evaluates performance on common corruptions and perturbations not worst-case adversarial perturbations. We find that there are negligible changes in relative corruption robustness from AlexNet classifiers to ResNet classifiers. Afterward we discover ways to enhance corruption and perturbation robustness. We even find that a bypassed adversarial defense provides substantial common perturbation robustness. Together our benchmarks may aid future work toward networks that robustly generalize.
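Our reading of the benchmark's headline metric: per corruption type, a model's top-1 errors are summed over severity levels and normalized by AlexNet's summed errors, and the mean of these ratios is the mean Corruption Error (mCE). A sketch with invented error rates:

```python
import numpy as np

def mce(model_err, alexnet_err):
    """model_err, alexnet_err: (n_corruptions, n_severities) top-1 error rates.
    Returns the mean Corruption Error; values below 1 indicate a model
    more corruption-robust than the AlexNet baseline."""
    ce = model_err.sum(axis=1) / alexnet_err.sum(axis=1)
    return float(ce.mean())

model = np.array([[0.30, 0.40, 0.55],    # invented errors for two corruption
                  [0.25, 0.35, 0.50]])   # types at three severities
alexnet = np.array([[0.60, 0.70, 0.85],
                    [0.55, 0.70, 0.80]])
print(round(mce(model, alexnet), 3))     # ≈ 0.56: better than AlexNet
```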


Journal ArticleDOI
TL;DR: This work provides a detailed study of BLE fingerprinting using 19 beacons distributed around a ~600 m2 testbed to position a consumer device, and investigates the choice of key parameters in a BLE positioning system, including beacon density, transmit power, and transmit frequency.
Abstract: The complexity of indoor radio propagation has resulted in location-awareness being derived from empirical fingerprinting techniques, where positioning is performed via a previously-constructed radio map, usually of WiFi signals. The recent introduction of the Bluetooth Low Energy (BLE) radio protocol provides new opportunities for indoor location. It supports portable battery-powered beacons that can be easily distributed at low cost, giving it distinct advantages over WiFi. However, its differing use of the radio band brings new challenges too. In this work, we provide a detailed study of BLE fingerprinting using 19 beacons distributed around a $\sim\! 600\ \mbox{m}^2$ testbed to position a consumer device. We demonstrate the high susceptibility of BLE to fast fading, show how to mitigate this, and quantify the true power cost of continuous BLE scanning. We further investigate the choice of key parameters in a BLE positioning system, including beacon density, transmit power, and transmit frequency. We also provide quantitative comparison with WiFi fingerprinting. Our results show advantages to the use of BLE beacons for positioning. For one-shot (push-to-fix) positioning, a dense beacon deployment (about one beacon per $30\ \mbox{m}^2$) yields the best accuracy, a sparse deployment (one beacon per $100\ \mbox{m}^2$) is less accurate, and both outperform the < 8.5 m achieved by an established WiFi network in the same area.
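The fingerprinting step itself is generic; a weighted k-nearest-neighbour estimator in signal space is a common baseline (this is not the paper's exact estimator, and the radio map below is invented):

```python
import numpy as np

def knn_locate(radio_map, positions, observed, k=3):
    """radio_map: (n_points, n_beacons) surveyed mean RSSI values;
    positions: (n_points, 2) survey-point coordinates (m);
    observed: (n_beacons,) live RSSI vector from the device."""
    d = np.linalg.norm(radio_map - observed, axis=1)   # signal-space distance
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-6)                      # inverse-distance weights
    return (w[:, None] * positions[nearest]).sum(0) / w.sum()

rng = np.random.default_rng(2)
radio_map = rng.uniform(-90, -50, size=(100, 19))      # 100 points, 19 beacons
positions = rng.uniform(0, 25, size=(100, 2))
reading = radio_map[7] + rng.normal(0, 2, size=19)     # noisy repeat of point 7
print(knn_locate(radio_map, positions, reading), "vs", positions[7])
```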

Proceedings ArticleDOI
01 Jun 2016
TL;DR: A novel CNN based tracking framework is proposed, which takes full advantage of features from different CNN layers and uses an adaptive Hedge method to hedge several CNN based trackers into a single stronger one.
Abstract: In recent years, several methods have been developed to utilize hierarchical features learned from a deep convolutional neural network (CNN) for visual tracking. However, as features from a certain CNN layer characterize an object of interest from only one aspect or one level, the performance of such trackers trained with features from one layer (usually the second to last layer) can be further improved. In this paper, we propose a novel CNN based tracking framework, which takes full advantage of features from different CNN layers and uses an adaptive Hedge method to hedge several CNN based trackers into a single stronger one. Extensive experiments on a benchmark dataset of 100 challenging image sequences demonstrate the effectiveness of the proposed algorithm compared to several state-of-the-art trackers.
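The standard Hedge (multiplicative-weights) update underlying the tracker ensemble is a two-liner; the paper's adaptive step-size and regret terms are omitted in this sketch:

```python
import numpy as np

def hedge_update(weights, losses, eta=0.1):
    """Experts with larger loss lose weight exponentially;
    weights are renormalized to remain a distribution."""
    w = weights * np.exp(-eta * losses)
    return w / w.sum()

w = np.ones(5) / 5                             # five CNN-layer trackers
losses = np.array([0.9, 0.2, 0.4, 0.8, 0.5])   # invented per-frame losses
for _ in range(20):
    w = hedge_update(w, losses)
print(w.round(3))                              # mass shifts to the low-loss expert
```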

Proceedings ArticleDOI
Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, Jun Zhao
01 Jul 2015
TL;DR: A word-representation model to capture meaningful semantic regularities for words and a framework based on a convolutional neural network to capture sentence-level clues are introduced.
Abstract: Traditional approaches to the task of ACE event extraction primarily rely on elaborately designed features and complicated natural language processing (NLP) tools. These traditional approaches lack generalization, take a large amount of human effort and are prone to error propagation and data sparsity problems. This paper proposes a novel event-extraction method, which aims to automatically extract lexical-level and sentence-level features without using complicated NLP tools. We introduce a word-representation model to capture meaningful semantic regularities for words and adopt a framework based on a convolutional neural network (CNN) to capture sentence-level clues. However, CNN can only capture the most important information in a sentence and may miss valuable facts when considering multiple-event sentences. We propose a dynamic multi-pooling convolutional neural network (DMCNN), which uses a dynamic multi-pooling layer according to event triggers and arguments, to reserve more crucial information. The experimental results show that our approach significantly outperforms other state-of-the-art methods.
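Dynamic multi-pooling is easy to state concretely: the convolutional feature map is split at the trigger and argument token positions and each segment is max-pooled separately, so information beyond the single global maximum is preserved. A NumPy sketch (the exact segment-boundary convention is our assumption):

```python
import numpy as np

def dynamic_multi_pool(feature_map, split_positions):
    """feature_map: (seq_len, n_filters) convolution output.
    Splits at the given token positions and max-pools each segment,
    returning the concatenated segment maxima."""
    bounds = [0] + sorted(split_positions) + [feature_map.shape[0]]
    pooled = [feature_map[bounds[i]:bounds[i + 1]].max(axis=0)
              for i in range(len(bounds) - 1) if bounds[i] < bounds[i + 1]]
    return np.concatenate(pooled)

feats = np.random.rand(20, 8)               # 20 token positions, 8 conv filters
vec = dynamic_multi_pool(feats, [5, 12])    # trigger at 5, argument at 12
print(vec.shape)                            # (24,) = 3 segments x 8 filters
```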

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new dataset, called Bot-IoT, which incorporates legitimate and simulated IoT network traffic, along with various types of attacks, and evaluated the reliability of the dataset using different statistical and machine learning methods for forensics purposes.

Journal ArticleDOI
TL;DR: The authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types to illustrate how common clinical problems are being addressed.
Abstract: Judgement, as one of the core tenets of medicine, relies upon the integration of multilayered data with nuanced decision making. Cancer offers a unique context for medical decisions given not only its variegated forms with evolution of disease but also the need to take into account the individual condition of patients, their ability to receive treatment, and their responses to treatment. Challenges remain in the accurate detection, characterization, and monitoring of cancers despite improved technologies. Radiographic assessment of disease most commonly relies upon visual evaluations, the interpretations of which may be augmented by advanced computational analyses. In particular, artificial intelligence (AI) promises to make great strides in the qualitative interpretation of cancer imaging by expert clinicians, including volumetric delineation of tumors over time, extrapolation of the tumor genotype and biological course from its radiographic phenotype, prediction of clinical outcome, and assessment of the impact of disease and treatment on adjacent organs. AI may automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection, management decisions on whether or not to administer an intervention, and subsequent observation to a yet to be envisioned paradigm. Here, the authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types (lung, brain, breast, and prostate) to illustrate how common clinical problems are being addressed. Although most studies evaluating AI applications in oncology to date have not been vigorously validated for reproducibility and generalizability, the results do highlight increasingly concerted efforts in pushing AI technology to clinical use and to impact future directions in cancer care.

Journal ArticleDOI
TL;DR: This editorial aims to propose clear definitions of terms such as microbiome, microbiota, metabolomics, metagenome, and metagenomics, and implores scientists in the field to adopt and perfect them.
Abstract: The advancement of DNA/RNA, proteins, and metabolite analytical platforms, combined with increased computing technologies, has transformed the field of microbial community analysis. This transformation is evident in the exponential increase in the number of publications describing the composition and structure, and sometimes function, of the microbial communities inhabiting the human body. This rapid evolution of the field has been accompanied by confusion in the vocabulary used to describe different aspects of these communities and their environments. The misuse of terms such as microbiome, microbiota, metabolomics, metagenome, and metagenomics, among others, has contributed to misunderstanding of many study results by the scientific community and the general public alike. A few review articles have previously defined those terms, but mainly as sidebars, and no clear definitions or use cases have been published. In this editorial, we aim to propose clear definitions of each of these terms, which we would implore scientists in the field to adopt and perfect.

Journal ArticleDOI
TL;DR: The authors study how firms differ from their competitors using new time-varying measures of product similarity based on text-based analysis of firm 10-K product descriptions and find evidence that firm R&D and advertising are associated with subsequent differentiation from competitors.
Abstract: We study how firms differ from their competitors using new time-varying measures of product similarity based on text-based analysis of firm 10-K product descriptions. This year-by-year set of product similarity measures allows us to generate a new set of industries in which firms can have their own distinct set of competitors. Our new sets of competitors explain specific discussion of high competition, rivals identified by managers as peer firms, and changes to industry competitors following exogenous industry shocks. We also find evidence that firm R&D and advertising are associated with subsequent differentiation from competitors, consistent with theories of endogenous product differentiation.
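In stylized form, the product-similarity measure is a cosine between word-presence vectors built from each firm's product description; the descriptions below are invented, and the paper applies further vocabulary filters and normalization to 10-K product sections:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "cloud software analytics platform enterprise data",     # firm A
    "enterprise data analytics software services",           # firm B
    "oil gas drilling exploration pipeline services",        # firm C
]
X = CountVectorizer(binary=True).fit_transform(docs)  # word-presence vectors
print(cosine_similarity(X).round(2))   # A and B score high: likely competitors
```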

Journal ArticleDOI
TL;DR: In this article, the effects of Fe incorporation on structure-activity relationships in Ni(oxy)hydroxide were investigated using electrochemical, in situ Raman, X-ray photoelectron spectroscopy, and electrochemical quartz crystal microbalance measurements.
Abstract: Ni-(oxy)hydroxide-based materials are promising earth-abundant catalysts for electrochemical water oxidation in basic media. Recent findings demonstrate that incorporation of trace Fe impurities from commonly used KOH electrolytes significantly improves oxygen evolution reaction (OER) activity over NiOOH electrocatalysts. Because nearly all previous studies detailing structural differences between α-Ni(OH)2/γ-NiOOH and β-Ni(OH)2/β-NiOOH were completed in unpurified electrolytes, it is unclear whether these structural changes are unique to the aging phase transition in the Ni-(oxy)hydroxide matrix or if they arise fully or in part from inadvertent Fe incorporation. Here, we report an investigation of the effects of Fe incorporation on structure–activity relationships in Ni-(oxy)hydroxide. Electrochemical, in situ Raman, X-ray photoelectron spectroscopy, and electrochemical quartz crystal microbalance measurements were employed to investigate Ni(OH)2 thin films aged in Fe-free and unpurified (reagent-grade)...

Journal ArticleDOI
TL;DR: To estimate medical expenditures attributable to older adult falls using a methodology that can be updated annually to track these expenditures over time, national vital statistics for fatal falls are combined with Medicare survey data for nonfatal falls.
Abstract: Objectives To estimate medical expenditures attributable to older adult falls using a methodology that can be updated annually to track these expenditures over time. Design Population data from the National Vital Statistics System (NVSS) and cost estimates from the Web-based Injury Statistics Query and Reporting System (WISQARS) for fatal falls; quasi-experimental regression analysis of data from the Medicare Current Beneficiaries Survey (MCBS) for nonfatal falls. Setting U.S. population aged 65 and older during 2015. Participants Fatal falls from the 2015 NVSS (N=28,486); respondents to the 2011 MCBS (N=3,460). Measurements Total spending attributable to older adult falls in the United States in 2015, in dollars. Results In 2015, the estimated medical costs attributable to fatal and nonfatal falls were approximately $50.0 billion. For nonfatal falls, Medicare paid approximately $28.9 billion, Medicaid $8.7 billion, and private and other payers $12.0 billion. Overall medical spending for fatal falls was estimated to be $754 million. Conclusion Older adult falls result in substantial medical costs. Measuring medical costs attributable to falls will provide vital information about the magnitude of the problem and the potential financial effect of effective prevention strategies.

Posted Content
TL;DR: This paper proposes a unified framework for generalizing CNN architectures to non-Euclidean domains (graphs and manifolds) and learning local, stationary, and compositional task-specific features, and tests the proposed method on standard tasks from the realms of image, graph and 3D shape analysis, showing that it consistently outperforms previous approaches.
Abstract: Deep learning has achieved a remarkable performance breakthrough in several fields, most notably in speech recognition, natural language processing, and computer vision. In particular, convolutional neural network (CNN) architectures currently produce state-of-the-art performance on a variety of image analysis tasks such as object detection and recognition. Most deep learning research has so far focused on dealing with 1D, 2D, or 3D Euclidean-structured data such as acoustic signals, images, or videos. Recently, there has been an increasing interest in geometric deep learning, attempting to generalize deep learning methods to non-Euclidean structured data such as graphs and manifolds, with a variety of applications from the domains of network analysis, computational social science, or computer graphics. In this paper, we propose a unified framework that generalizes CNN architectures to non-Euclidean domains (graphs and manifolds) and learns local, stationary, and compositional task-specific features. We show that various non-Euclidean CNN methods previously proposed in the literature can be considered as particular instances of our framework. We test the proposed method on standard tasks from the realms of image-, graph- and 3D shape analysis and show that it consistently outperforms previous approaches.
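One concrete instance that such a framework subsumes is graph convolution with a symmetrically normalized adjacency matrix; a NumPy sketch on a toy graph (random features and weights for illustration):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: add self-loops, symmetrically normalize
    the adjacency, average neighbourhood features, apply a linear map + ReLU."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(A_hat.sum(1) ** -0.5)
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], float)     # path graph on three nodes
X = np.random.rand(3, 4)             # node features
W = np.random.rand(4, 2)             # learnable weights
print(gcn_layer(A, X, W).shape)      # (3, 2)
```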

Proceedings Article
06 Aug 2017
TL;DR: A new neural generative model is proposed which combines variational auto-encoders and holistic attribute discriminators for effective imposition of semantic structures in the generic generation and manipulation of text.
Abstract: Generic generation and manipulation of text is challenging and has limited success compared to recent deep generative modeling in visual domain. This paper aims at generating plausible text sentences, whose attributes are controlled by learning disentangled latent representations with designated semantics. We propose a new neural generative model which combines variational auto-encoders (VAEs) and holistic attribute discriminators for effective imposition of semantic structures. The model can alternatively be seen as enhancing VAEs with the wake-sleep algorithm for leveraging fake samples as extra training data. With differentiable approximation to discrete text samples, explicit constraints on independent attribute controls, and efficient collaborative learning of generator and discriminators, our model learns interpretable representations from even only word annotations, and produces short sentences with desired attributes of sentiment and tenses. Quantitative experiments using trained classifiers as evaluators validate the accuracy of sentence and attribute generation.

Journal ArticleDOI
TL;DR: This review describes the enzymatic pathways involved in the degradation of purines, covering their structure and biochemistry through to the formation of uric acid.

Journal ArticleDOI
TL;DR: In patients with glucocorticoid-dependent severe asthma, dupilumab treatment reduced oral glucocorticoid use while decreasing the rate of severe exacerbations and increasing the FEV1.
Abstract: Background Dupilumab is a fully human anti–interleukin-4 receptor α monoclonal antibody that blocks both interleukin-4 and interleukin-13 signaling. Its effectiveness in reducing oral glucocorticoid use in patients with severe asthma while maintaining asthma control is unknown. Methods We randomly assigned 210 patients with oral glucocorticoid–treated asthma to receive add-on dupilumab (at a dose of 300 mg) or placebo every 2 weeks for 24 weeks. After a glucocorticoid dose-adjustment period before randomization, glucocorticoid doses were adjusted in a downward trend from week 4 to week 20 and then maintained at a stable dose for 4 weeks. The primary end point was the percentage reduction in the glucocorticoid dose at week 24. Key secondary end points were the proportion of patients at week 24 with a reduction of at least 50% in the glucocorticoid dose and the proportion of patients with a reduction to a glucocorticoid dose of less than 5 mg per day. Severe exacerbation rates and the forced expira...

Journal ArticleDOI
TL;DR: The authors examined the Climate Club as a model for international climate policy and found that without sanctions against non-participants there are no stable coalitions other than those with minimal abatement.
Abstract: Notwithstanding great progress in scientific and economic understanding of climate change, it has proven difficult to forge international agreements because of free-riding, as seen in the defunct Kyoto Protocol. This study examines the club as a model for international climate policy. Based on economic theory and empirical modeling, it finds that without sanctions against non-participants there are no stable coalitions other than those with minimal abatement. By contrast, a regime with small trade penalties on non-participants, a Climate Club, can induce a large stable coalition with high levels of abatement. (JEL Q54, Q58, K32, K33)

Journal ArticleDOI
TL;DR: The Reserve, Resilience, and Protective Factors Professional Interest Area established a whitepaper workgroup to develop consensus definitions for cognitive reserve, brain reserve, and brain maintenance; the workgroup also evaluated measures that have been used to implement these concepts in research settings and developed guidelines for research that explores or utilizes them.
Abstract: Several concepts, which in the aggregate might be used to account for "resilience" against age- and disease-related changes, have been the subject of much research. These include brain reserve, cognitive reserve, and brain maintenance. However, different investigators have used these terms in different ways, and there has never been an attempt to arrive at consensus on the definition of these concepts. Furthermore, there has been confusion regarding the measurement of these constructs and the appropriate ways to apply them to research. Therefore, the Reserve, Resilience, and Protective Factors Professional Interest Area, established under the auspices of the Alzheimer's Association, convened a whitepaper workgroup to develop consensus definitions for cognitive reserve, brain reserve, and brain maintenance. The workgroup also evaluated measures that have been used to implement these concepts in research settings and developed guidelines for research that explores or utilizes these concepts. The workgroup hopes that this whitepaper will form a reference point for researchers in this area and facilitate research by supplying a common language.