
Journal ArticleDOI
TL;DR: The task force recommends an early imaging test in patients with suspected LVV, with ultrasound and MRI being the first choices in GCA and TAK, respectively. These are the first EULAR recommendations providing up-to-date guidance on the role of imaging in the diagnosis and monitoring of patients with (suspected) LVV.
Abstract: To develop evidence-based recommendations for the use of imaging modalities in primary large vessel vasculitis (LVV) including giant cell arteritis (GCA) and Takayasu arteritis (TAK). European League Against Rheumatism (EULAR) standardised operating procedures were followed. A systematic literature review was conducted to retrieve data on the role of imaging modalities including ultrasound, MRI, CT and [18F]-fluorodeoxyglucose positron emission tomography (PET) in LVV. Based on evidence and expert opinion, the task force consisting of 20 physicians, healthcare professionals and patients from 10 EULAR countries developed recommendations, with consensus obtained through voting. The final level of agreement was voted anonymously. A total of 12 recommendations have been formulated. The task force recommends an early imaging test in patients with suspected LVV, with ultrasound and MRI being the first choices in GCA and TAK, respectively. CT or PET may be used alternatively. In case the diagnosis is still in question after clinical examination and imaging, additional investigations including temporal artery biopsy and/or additional imaging are required. In patients with a suspected flare, imaging might help to better assess disease activity. The frequency and choice of imaging modalities for long-term monitoring of structural damage remains an individual decision; close monitoring for aortic aneurysms should be conducted in patients at risk for this complication. All imaging should be performed by a trained specialist using appropriate operational procedures and settings. These are the first EULAR recommendations providing up-to-date guidance for the role of imaging in the diagnosis and monitoring of patients with (suspected) LVV.

669 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the extent to which quantile mapping algorithms modify global climate model (GCM) trends in mean precipitation and precipitation extremes indices, and proposed a bias correction algorithm, quantile delta mapping (QDM), that explicitly preserves relative changes in precipitation quantiles.
Abstract: Quantile mapping bias correction algorithms are commonly used to correct systematic distributional biases in precipitation outputs from climate models. Although they are effective at removing historical biases relative to observations, it has been found that quantile mapping can artificially corrupt future model-projected trends. Previous studies on the modification of precipitation trends by quantile mapping have focused on mean quantities, with less attention paid to extremes. This article investigates the extent to which quantile mapping algorithms modify global climate model (GCM) trends in mean precipitation and precipitation extremes indices. First, a bias correction algorithm, quantile delta mapping (QDM), that explicitly preserves relative changes in precipitation quantiles is presented. QDM is compared on synthetic data with detrended quantile mapping (DQM), which is designed to preserve trends in the mean, and with standard quantile mapping (QM). Next, methods are applied to phase 5 of the Coupled Model Intercomparison Project (CMIP5) simulations.
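
To make the QDM construction concrete, here is a minimal NumPy sketch for a ratio variable such as precipitation; the plotting-position quantile estimator and the function name `qdm` are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def qdm(obs_hist, mod_hist, mod_fut):
    """Quantile delta mapping sketch for a ratio variable (e.g. precipitation).

    Each future model value is corrected at its own quantile so that the
    model-projected *relative* change at that quantile is preserved.
    Inputs are 1-D arrays: historical observations, historical model
    output, and future model output.
    """
    # Empirical (plotting-position) quantile of each future value
    # within the future model distribution.
    n = len(mod_fut)
    tau = (np.argsort(np.argsort(mod_fut)) + 0.5) / n
    # Relative change signal: future model value divided by the
    # historical model value at the same quantile.
    delta = mod_fut / np.quantile(mod_hist, tau)
    # Map the quantile onto the observed distribution, then re-apply
    # the model's relative change.
    return np.quantile(obs_hist, tau) * delta
```

Because the relative change `delta` multiplies the observation-calibrated quantile, the projected relative change at every quantile survives the correction, which is the property standard QM can violate.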

669 citations


Journal ArticleDOI
TL;DR: Best-practice recommendations are presented for the prevention, recognition, and treatment of exertional heat illnesses (EHIs), together with a description of the relevant physiology of thermoregulation.
Abstract: Objective: To present best-practice recommendations for the prevention, recognition, and treatment of exertional heat illnesses (EHIs) and to describe the relevant physiology of thermoregulation.

669 citations


Journal ArticleDOI
TL;DR: A comparison of 2,448,055 citations to 2,299 highly-cited documents across 252 subject categories shows that Google Scholar consistently finds the largest percentage of citations, with its citation data essentially a superset of Web of Science and Scopus.
Abstract: Despite citation counts from Google Scholar (GS), Web of Science (WoS), and Scopus being widely consulted by researchers and sometimes used in research evaluations, there is no recent or systematic evidence about the differences between them. In response, this paper investigates 2,448,055 citations to 2,299 English-language highly-cited documents from 252 GS subject categories published in 2006, comparing GS, the WoS Core Collection, and Scopus. GS consistently found the largest percentage of citations across all areas (93%-96%), far ahead of Scopus (35%-77%) and WoS (27%-73%). GS found nearly all the WoS (95%) and Scopus (92%) citations. Most citations found only by GS were from non-journal sources (48%-65%), including theses, books, conference papers, and unpublished materials. Many were non-English (19%-38%), and they tended to be much less cited than citing sources that were also in Scopus or WoS. Despite the many unique GS citing sources, Spearman correlations between citation counts in GS and WoS or Scopus are high (0.78-0.99). They are lower in the Humanities, and lower between GS and WoS than between GS and Scopus. The results suggest that in all areas GS citation data is essentially a superset of WoS and Scopus, with substantial extra coverage.

669 citations


Journal ArticleDOI
TL;DR: Antibodies to PD-1 and PD-L1 have shown potential clinical benefit in advanced solid tumors; these drugs and additional immune checkpoint inhibitors are currently under investigation in multiple clinical trials as single-agent therapy and in combination with other agents.
Abstract: Emerging cancer therapeutics target the immune system, stimulating host antitumor response. Tumor cells generate an immunosuppressive milieu with multiple mechanisms to evade immune destruction, including disruption of effective antigen presentation, reduction of effector T-cell function, and upregulation of pathways that promote tolerance and T-cell anergy. The programmed death (PD)-1/PD ligand-1 (PD-L1) pathway is a critical component of tumor-mediated immunosuppression. Antibodies to PD-1 and PD-L1 have shown potential clinical benefit in advanced solid tumors. The US Food and Drug Administration approved the PD-1 inhibitors pembrolizumab and nivolumab for metastatic melanoma and also recently approved nivolumab for the treatment of metastatic squamous non–small-cell lung cancer. The US Food and Drug Administration has also designated the PD-L1 inhibitor MPDL3280A as a breakthrough therapy for bladder cancer and non–small-cell lung cancer. These drugs and additional immune checkpoint inhibitors are currently under investigation in multiple clinical trials as single-agent therapy and also in combination with other agents.

669 citations


Book ChapterDOI
TL;DR: In this paper, the challenges in fog computing acting as an intermediate layer between IoT devices/sensors and cloud datacentres and review the current developments in this field are discussed.
Abstract: In recent years, the number of Internet of Things (IoT) devices/sensors has increased to a great extent. To support the computational demand of real-time latency-sensitive applications of largely geo-distributed IoT devices/sensors, a new computing paradigm named "Fog computing" has been introduced. Generally, Fog computing resides closer to the IoT devices/sensors and extends the Cloud-based computing, storage and networking facilities. In this chapter, we comprehensively analyse the challenges in Fogs acting as an intermediate layer between IoT devices/sensors and Cloud datacentres and review the current developments in this field. We present a taxonomy of Fog computing according to the identified challenges and its key features. We also map the existing works to the taxonomy in order to identify current research gaps in the area of Fog computing. Moreover, based on the observations, we propose future directions for research.

669 citations


Posted Content
TL;DR: This work presents an efficient Bayesian CNN, offering better robustness to over-fitting on small data than traditional approaches, and approximates the model's intractable posterior with Bernoulli variational distributions.
Abstract: Convolutional neural networks (CNNs) work well on large datasets. But labelled data is hard to collect, and in some applications larger amounts of data are not available. The problem then is how to use CNNs with small data -- as CNNs overfit quickly. We present an efficient Bayesian CNN, offering better robustness to over-fitting on small data than traditional approaches. We do so by placing a probability distribution over the CNN's kernels. We approximate our model's intractable posterior with Bernoulli variational distributions, requiring no additional model parameters. On the theoretical side, we cast dropout network training as approximate inference in Bayesian neural networks. This allows us to implement our model using existing tools in deep learning with no increase in time complexity, while highlighting a negative result in the field. We show a considerable improvement in classification accuracy compared to standard techniques and improve on published state-of-the-art results for CIFAR-10.
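
Since casting dropout training as approximate Bayesian inference means the Bernoulli masks must stay active at prediction time, here is a minimal PyTorch sketch of that mechanism; the tiny architecture, the 0.5 dropout rate, and the helper names are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianCNN(nn.Module):
    """Tiny CNN whose dropout layers stay stochastic at test time."""
    def __init__(self, n_classes=10, p=0.5):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, 3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.fc = nn.Linear(64 * 8 * 8, n_classes)  # for 32x32 inputs
        self.p = p

    def forward(self, x):
        # training=True keeps sampling Bernoulli masks even in eval
        # mode -- this is what makes repeated forward passes stochastic.
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.dropout(x, self.p, training=True)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = F.dropout(x, self.p, training=True)
        return self.fc(x.flatten(1))

def mc_predict(model, x, n_samples=50):
    """Monte Carlo estimate of the predictive distribution."""
    probs = torch.stack([F.softmax(model(x), dim=1)
                         for _ in range(n_samples)])
    return probs.mean(0)  # average over sampled dropout masks
```

Averaging softmax outputs over many stochastic passes approximates the posterior predictive, which is where the improved robustness to over-fitting comes from.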

669 citations


Journal ArticleDOI
TL;DR: Additive manufacturing is gaining ground in the construction industry, where the potential to improve on current construction methods is significant; the approach is currently being explored both in the US and in Europe.
Abstract: Additive manufacturing is gaining ground in the construction industry. The potential to improve on current construction methods is significant. One such method being explored currently, both in the US and Europe, is additive manufacturing.

669 citations


Journal ArticleDOI
TL;DR: There are also potential challenges with DL application in ophthalmology, including clinical and technical challenges, explainability of the algorithm results, medicolegal issues, and physician and patient acceptance of the AI ‘black-box’ algorithms.
Abstract: Artificial intelligence (AI) based on deep learning (DL) has sparked tremendous global interest in recent years. DL has been widely adopted in image recognition, speech recognition and natural language processing, but is only beginning to impact on healthcare. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography and visual fields, achieving robust classification performance in the detection of diabetic retinopathy and retinopathy of prematurity, the glaucoma-like disc, macular oedema and age-related macular degeneration. DL in ocular imaging may be used in conjunction with telemedicine as a possible solution to screen, diagnose and monitor major eye diseases for patients in primary care and community settings. Nonetheless, there are also potential challenges with DL application in ophthalmology, including clinical and technical challenges, explainability of the algorithm results, medicolegal issues, and physician and patient acceptance of the AI ‘black-box’ algorithms. DL could potentially revolutionise how ophthalmology is practised in the future. This review provides a summary of the state-of-the-art DL systems described for ophthalmic applications, potential challenges in clinical deployment and the path forward.

669 citations


Proceedings ArticleDOI
07 Dec 2015
TL;DR: This paper shows that deep features and traditional hand-engineered features have quite different distributions of pairwise similarities, so existing aggregation methods have to be carefully re-evaluated; that re-evaluation reveals that, in contrast to shallow features, the simple aggregation method based on sum pooling provides the best performance for deep convolutional features.
Abstract: Several recent works have shown that image descriptors produced by deep convolutional neural networks provide state-of-the-art performance for image classification and retrieval problems. It also has been shown that the activations from the convolutional layers can be interpreted as local features describing particular image regions. These local features can be aggregated using aggregating methods developed for local features (e.g. Fisher vectors), thus providing a new powerful global descriptor. In this paper we investigate possible ways to aggregate local deep features to produce compact descriptors for image retrieval. First, we show that deep features and traditional hand-engineered features have quite different distributions of pairwise similarities, hence existing aggregation methods have to be carefully re-evaluated. Such re-evaluation reveals that in contrast to shallow features, the simple aggregation method based on sum pooling provides the best performance for deep convolutional features. This method is efficient, has few parameters, and bears little risk of overfitting when e.g. learning the PCA matrix. In addition, we suggest a simple yet efficient query expansion scheme suitable for the proposed aggregation method. Overall, the new compact global descriptor improves the state-of-the-art on four common benchmarks considerably.
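
As an illustration of sum-pooling aggregation, here is a minimal PyTorch sketch that turns one convolutional feature map into a compact global descriptor; the function name and the optional PCA-whitening matrix are assumptions for illustration, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def sum_pool_descriptor(feature_map, pca=None):
    """Aggregate conv activations into a compact global descriptor.

    feature_map: (channels, H, W) activations from a conv layer.
    pca: optional assumed (n_dims, channels) projection matrix for
    dimensionality reduction / whitening.
    """
    desc = feature_map.sum(dim=(1, 2))          # sum over spatial grid
    desc = F.normalize(desc, dim=0)             # L2 normalization
    if pca is not None:
        desc = F.normalize(pca @ desc, dim=0)   # project, renormalize
    return desc
```

Retrieval then reduces to comparing descriptors by dot product; the few-parameter design (only the optional PCA matrix is learned) is what keeps the overfitting risk low.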

669 citations


Journal ArticleDOI
TL;DR: The present guidelines arose from an international effort that started with exploratory meetings in 2014 in both Europe and the USA and culminated in a Consensus Meeting held in Cincinnati, Ohio, USA in July 2016; they address, among other topics, the efficacy and most optimal treatment of short stature, infertility, hypertension, and hormonal replacement therapy.
Abstract: Turner syndrome affects 25-50 per 100,000 females and can involve multiple organs through all stages of life, necessitating a multidisciplinary approach to care. Previous guidelines have highlighted this, but numerous important advances have been noted recently. These advances cover all specialty fields involved in the care of girls and women with TS. This paper is based on an international effort that started with exploratory meetings in 2014 in both Europe and the USA, and culminated with a Consensus Meeting held in Cincinnati, Ohio, USA in July 2016. Prior to this meeting, five groups each addressed important areas in TS care: 1) diagnostic and genetic issues, 2) growth and development during childhood and adolescence, 3) congenital and acquired cardiovascular disease, 4) transition and adult care, and 5) other comorbidities and neurocognitive issues. These groups produced proposals for the present guidelines. Additionally, four pertinent questions were submitted for formal GRADE (Grading of Recommendations, Assessment, Development and Evaluation) evaluation with a separate systematic review of the literature. These four questions related to the efficacy and most optimal treatment of short stature, infertility, hypertension, and hormonal replacement therapy. The guidelines project was initiated by the European Society for Endocrinology and the Pediatric Endocrine Society, in collaboration with The European Society for Pediatric Endocrinology, The Endocrine Society, European Society of Human Reproduction and Embryology, The American Heart Association, The Society for Endocrinology, and the European Society of Cardiology. The guideline has been formally endorsed by the European Society for Endocrinology, the Pediatric Endocrine Society, the European Society for Pediatric Endocrinology, the European Society of Human Reproduction and Embryology and the Endocrine Society. Advocacy groups appointed representatives who participated in pre-meeting discussions and in the consensus meeting.

Journal ArticleDOI
TL;DR: A suite of methods for extracting microplastics ingested by biota, including dissection, depuration, digestion and density separation, is evaluated, and the urgent need for the standardisation of protocols to promote consistency in data collection and analysis is discussed.
Abstract: Microplastic debris (<5 mm) is a prolific environmental pollutant, found worldwide in marine, freshwater and terrestrial ecosystems. Interactions between biota and microplastics are prevalent, and there is growing evidence that microplastics can incite significant health effects in exposed organisms. To date, the methods used to quantify such interactions have varied greatly between studies. Here, we critically review methods for sampling, isolating and identifying microplastics ingested by environmentally and laboratory exposed fish and invertebrates. We aim to draw attention to the strengths and weaknesses of the suite of published microplastic extraction and enumeration techniques. Firstly, we highlight the risk of microplastic losses and accumulation during biotic sampling and storage, and suggest protocols for mitigating contamination in the field and laboratory. We evaluate a suite of methods for extracting microplastics ingested by biota, including dissection, depuration, digestion and density separation. Lastly, we consider the applicability of visual identification and chemical analyses in categorising microplastics. We discuss the urgent need for the standardisation of protocols to promote consistency in data collection and analysis. Harmonized methods will allow for more accurate assessment of the impacts and risks microplastics pose to biota and increase comparability between studies.

Journal ArticleDOI
TL;DR: Elexacaftor plus tezacaftor plus ivacaftor provided clinically robust benefit compared with tezacaftor plus ivacaftor alone, with a favourable safety profile, and shows the potential to lead to transformative improvements in the lives of people with cystic fibrosis who are homozygous for the F508del mutation.

Journal ArticleDOI
12 Jul 2016-JAMA
TL;DR: In this article, the authors provide updated recommendations for the use of antiretroviral therapy in adults (aged ≥18 years) with established HIV infection, including when to start treatment, initial regimens, and changing regimens along with recommendations for using ARVs for preventing HIV among those at risk, including preexposure and postexposure prophylaxis.
Abstract: Importance New data and therapeutic options warrant updated recommendations for the use of antiretroviral drugs (ARVs) to treat or to prevent HIV infection in adults. Objective To provide updated recommendations for the use of antiretroviral therapy in adults (aged ≥18 years) with established HIV infection, including when to start treatment, initial regimens, and changing regimens, along with recommendations for using ARVs for preventing HIV among those at risk, including preexposure and postexposure prophylaxis. Evidence Review A panel of experts in HIV research and patient care convened by the International Antiviral Society–USA reviewed data published in peer-reviewed journals, presented by regulatory agencies, or presented as conference abstracts at peer-reviewed scientific conferences since the 2014 report, for new data or evidence that would change previous recommendations or their ratings. Comprehensive literature searches were conducted in the PubMed and EMBASE databases through April 2016. Recommendations were by consensus, and each recommendation was rated by strength and quality of the evidence. Findings Newer data support the widely accepted recommendation that antiretroviral therapy should be started in all individuals with HIV infection with detectable viremia regardless of CD4 cell count. Recommended optimal initial regimens for most patients are 2 nucleoside reverse transcriptase inhibitors (NRTIs) plus an integrase strand transfer inhibitor (InSTI). Other effective regimens include nonnucleoside reverse transcriptase inhibitors or boosted protease inhibitors with 2 NRTIs. Recommendations for special populations and in the settings of opportunistic infections and concomitant conditions are provided. Reasons for switching therapy include convenience, tolerability, simplification, anticipation of potential new drug interactions, pregnancy or plans for pregnancy, elimination of food restrictions, virologic failure, or drug toxicities. Laboratory assessments are recommended before treatment, and monitoring during treatment is recommended to assess response, adverse effects, and adherence. Approaches to improve linkage to and retention in care are provided. Daily tenofovir disoproxil fumarate/emtricitabine is recommended for use as preexposure prophylaxis to prevent HIV infection in persons at high risk. When indicated, postexposure prophylaxis should be started as soon as possible after exposure. Conclusions and Relevance Antiretroviral agents remain the cornerstone of HIV treatment and prevention. All HIV-infected individuals with detectable plasma virus should receive treatment with recommended initial regimens consisting of an InSTI plus 2 NRTIs. Preexposure prophylaxis should be considered as part of an HIV prevention strategy for at-risk individuals. When used effectively, currently available ARVs can sustain HIV suppression and can prevent new HIV infection. With these treatment regimens, survival rates among HIV-infected adults who are retained in care can approach those of uninfected adults.

Posted Content
TL;DR: This paper proposes to represent a "fast" reinforcement learning algorithm as a recurrent neural network (RNN) and learn it from data; the algorithm is encoded in the weights of the RNN, which are learned slowly through a general-purpose ("slow") RL algorithm.
Abstract: Deep reinforcement learning (deep RL) has been successful in learning sophisticated behaviors automatically; however, the learning process requires a huge number of trials. In contrast, animals can learn new tasks in just a few trials, benefiting from their prior knowledge about the world. This paper seeks to bridge this gap. Rather than designing a "fast" reinforcement learning algorithm, we propose to represent it as a recurrent neural network (RNN) and learn it from data. In our proposed method, RL$^2$, the algorithm is encoded in the weights of the RNN, which are learned slowly through a general-purpose ("slow") RL algorithm. The RNN receives all information a typical RL algorithm would receive, including observations, actions, rewards, and termination flags; and it retains its state across episodes in a given Markov Decision Process (MDP). The activations of the RNN store the state of the "fast" RL algorithm on the current (previously unseen) MDP. We evaluate RL$^2$ experimentally on both small-scale and large-scale problems. On the small-scale side, we train it to solve randomly generated multi-arm bandit problems and finite MDPs. After RL$^2$ is trained, its performance on new MDPs is close to human-designed algorithms with optimality guarantees. On the large-scale side, we test RL$^2$ on a vision-based navigation task and show that it scales up to high-dimensional problems.
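
To make the RNN-as-algorithm interface concrete, here is a minimal PyTorch sketch of a recurrent policy fed with (observation, previous action, reward, termination flag) whose hidden state persists across episodes; the GRU cell, the sizes, and all names are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class RL2Policy(nn.Module):
    """Recurrent policy in the spirit of RL^2.

    The GRU consumes (observation, previous one-hot action, reward,
    done flag) at every step; because the hidden state is carried
    across episode boundaries within one MDP, it can accumulate the
    statistics that a 'fast' RL algorithm would store.
    """
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        in_dim = obs_dim + n_actions + 2  # obs, action, reward, done
        self.rnn = nn.GRUCell(in_dim, hidden)
        self.pi = nn.Linear(hidden, n_actions)

    def step(self, obs, prev_action_onehot, reward, done, h):
        x = torch.cat([obs, prev_action_onehot,
                       reward.unsqueeze(-1), done.unsqueeze(-1)], dim=-1)
        h = self.rnn(x, h)  # hidden state persists across episodes
        return torch.distributions.Categorical(logits=self.pi(h)), h
```

Only the weights of this network are trained, slowly, by an outer "slow" RL algorithm over many sampled MDPs; at deployment the weights are frozen and all adaptation happens in the hidden state `h`.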

Journal ArticleDOI
10 Oct 2017-JAMA
TL;DR: In patients with moderate to severe ARDS, a strategy with lung recruitment and titrated PEEP compared with low PEEP increased 28-day all-cause mortality, and these findings do not support the routine use of lung recruitment maneuver and PEEP titration in these patients.
Abstract: Importance The effects of recruitment maneuvers and positive end-expiratory pressure (PEEP) titration on clinical outcomes in patients with acute respiratory distress syndrome (ARDS) remain uncertain. Objective To determine if lung recruitment associated with PEEP titration according to the best respiratory-system compliance decreases 28-day mortality of patients with moderate to severe ARDS compared with a conventional low-PEEP strategy. Design, Setting, and Participants Multicenter, randomized trial conducted at 120 intensive care units (ICUs) from 9 countries from November 17, 2011, through April 25, 2017, enrolling adults with moderate to severe ARDS. Interventions An experimental strategy with a lung recruitment maneuver and PEEP titration according to the best respiratory–system compliance (n = 501; experimental group) or a control strategy of low PEEP (n = 509). All patients received volume-assist control mode until weaning. Main Outcomes and Measures The primary outcome was all-cause mortality until 28 days. Secondary outcomes were length of ICU and hospital stay; ventilator-free days through day 28; pneumothorax requiring drainage within 7 days; barotrauma within 7 days; and ICU, in-hospital, and 6-month mortality. Results A total of 1010 patients (37.5% female; mean [SD] age, 50.9 [17.4] years) were enrolled and followed up. At 28 days, 277 of 501 patients (55.3%) in the experimental group and 251 of 509 patients (49.3%) in the control group had died (hazard ratio [HR], 1.20; 95% CI, 1.01 to 1.42; P = .041). Compared with the control group, the experimental group strategy increased 6-month mortality (65.3% vs 59.9%; HR, 1.18; 95% CI, 1.01 to 1.38; P = .04), decreased the number of mean ventilator-free days (5.3 vs 6.4; difference, −1.1; 95% CI, −2.1 to −0.1; P = .03), increased the risk of pneumothorax requiring drainage (3.2% vs 1.2%; difference, 2.0%; 95% CI, 0.0% to 4.0%; P = .03), and the risk of barotrauma (5.6% vs 1.6%; difference, 4.0%; 95% CI, 1.5% to 6.5%; P = .001). There were no significant differences in the length of ICU stay, length of hospital stay, ICU mortality, and in-hospital mortality. Conclusions and Relevance In patients with moderate to severe ARDS, a strategy with lung recruitment and titrated PEEP compared with low PEEP increased 28-day all-cause mortality. These findings do not support the routine use of lung recruitment maneuver and PEEP titration in these patients. Trial Registration clinicaltrials.gov Identifier:NCT01374022

Journal ArticleDOI
Eun Hee Kang, Junhong Min, Jong Chul Ye
TL;DR: This work proposes an algorithm which uses a deep convolutional neural network which is applied to the wavelet transform coefficients of low‐dose CT images and effectively removes complex noise patterns from CT images derived from a reduced X‐ray dose.
Abstract: Purpose: Due to the potential risk of inducing cancer, radiation exposure by X-ray CT devices should be reduced for routine patient scanning. However, in low-dose X-ray CT, severe artifacts typically occur due to photon starvation, beam hardening, and other causes, all of which decrease the reliability of the diagnosis. Thus, a high-quality reconstruction method from low-dose X-ray CT data has become a major research topic in the CT community. Conventional model-based de-noising approaches are, however, computationally very expensive, and image-domain de-noising approaches cannot readily remove CT-specific noise patterns. To tackle these problems, we want to develop a new low-dose X-ray CT algorithm based on a deep-learning approach. Method: We propose an algorithm which uses a deep convolutional neural network (CNN) which is applied to the wavelet transform coefficients of low-dose CT images. More specifically, using a directional wavelet transform to extract the directional component of artifacts and exploit the intra- and inter-band correlations, our deep network can effectively suppress CT-specific noise. In addition, our CNN is designed with a residual learning architecture for faster network training and better performance. Results: Experimental results confirm that the proposed algorithm effectively removes complex noise patterns from CT images derived from a reduced X-ray dose. In addition, we show that the wavelet-domain CNN is efficient when used to remove noise from low-dose CT compared to existing approaches. Our results were rigorously evaluated by several radiologists at the Mayo Clinic and won second place at the 2016 "Low-Dose CT Grand Challenge". Conclusions: To the best of our knowledge, this work is the first deep-learning architecture for low-dose CT reconstruction which has been rigorously evaluated and proven to be effective. In addition, the proposed algorithm, in contrast to existing model-based iterative reconstruction (MBIR) methods, has considerable potential to benefit from large data sets. Therefore, we believe that the proposed algorithm opens a new direction in the area of low-dose CT research.
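
The pipeline the abstract describes (denoise wavelet coefficients with a residual CNN, then invert the transform) can be sketched as follows; the single-level 'db3' wavelet, the plain convolutional stack, and all names are illustrative assumptions standing in for the paper's directional wavelet transform and trained network.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

class WaveletResidualCNN(nn.Module):
    """Residual denoising CNN operating on wavelet subbands."""
    def __init__(self, channels=4, width=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, coeffs):
        # Residual learning: the network predicts the noise component,
        # which is subtracted from the noisy coefficients.
        return coeffs - self.net(coeffs)

def denoise(img, model):
    """One-level DWT: approximation + 3 detail subbands as channels."""
    cA, (cH, cV, cD) = pywt.dwt2(img, "db3")
    x = torch.from_numpy(np.stack([cA, cH, cV, cD])[None]).float()
    with torch.no_grad():
        y = model(x)[0].numpy()
    return pywt.idwt2((y[0], (y[1], y[2], y[3])), "db3")
```

Working on subbands rather than pixels lets the network see the directional structure of the streak artifacts, which is the motivation the abstract gives for the wavelet domain.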

Journal ArticleDOI
TL;DR: An IL-1-induced signaling cascade that leads to JAK/STAT activation and promotes an inflammatory CAF state is identified, suggesting multiple strategies to target these cells in vivo and illuminating ways to selectively target the CAFs that support tumor growth.
Abstract: Pancreatic ductal adenocarcinoma (PDAC) is poorly responsive to therapies and histologically contains a paucity of neoplastic cells embedded within a dense desmoplastic stroma. Within the stroma, cancer-associated fibroblasts (CAFs) secrete trophic factors and extracellular matrix components, and have been implicated in PDAC progression and chemotherapy resistance. We recently identified two distinct CAF subtypes characterized by either myofibroblastic or inflammatory phenotypes; however, the mechanisms underlying their diversity and their roles in PDAC remain unknown. Here, we use organoid and mouse models to identify TGF-beta and IL-1 as tumor-secreted ligands that promote CAF heterogeneity. We show that IL-1 induces LIF expression and downstream JAK/STAT activation to generate inflammatory CAFs, and demonstrate that TGF-beta antagonizes this process by downregulating IL-1R1 expression and promoting differentiation into myofibroblasts. Our results provide a mechanism through which distinct fibroblast niches are established in the PDAC microenvironment and illuminate strategies to selectively target CAFs that support tumor growth.

Journal ArticleDOI
Markus Ackermann, Marco Ajello, W. B. Atwood, Luca Baldini, +180 more (41 institutions)
TL;DR: The third catalog of active galactic nuclei (AGNs) detected by the Fermi-LAT (3LAC) is presented in this paper, which is based on the 3FGL of sources detected between 100 MeV and 300 GeV.
Abstract: The third catalog of active galactic nuclei (AGNs) detected by the Fermi-LAT (3LAC) is presented. It is based on the third Fermi-LAT catalog (3FGL) of sources detected between 100 MeV and 300 GeV.

Journal ArticleDOI
17 Feb 2015-PLOS ONE
TL;DR: This work presents a new semi-automated dasymetric modeling approach that incorporates detailed census and ancillary data in a flexible, “Random Forest” estimation technique, and outlines how this algorithm will be extended to provide freely-available gridded population data sets for Africa, Asia and Latin America.
Abstract: High resolution, contemporary data on human population distributions are vital for measuring impacts of population growth, monitoring human-environment interactions and for planning and policy development. Many methods are used to disaggregate census data and predict population densities for finer scale, gridded population data sets. We present a new semi-automated dasymetric modeling approach that incorporates detailed census and ancillary data in a flexible, "Random Forest" estimation technique. We outline the combination of widely available, remotely-sensed and geospatial data that contribute to the modeled dasymetric weights and then use the Random Forest model to generate a gridded prediction of population density at ~100 m spatial resolution. This prediction layer is then used as the weighting surface to perform dasymetric redistribution of the census counts at a country level. As a case study we compare the new algorithm and its products for three countries (Vietnam, Cambodia, and Kenya) with other common gridded population data production methodologies. We discuss the advantages of the new method and its gains in accuracy and flexibility over those previous approaches. Finally, we outline how this algorithm will be extended to provide freely-available gridded population data sets for Africa, Asia and Latin America.
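
A minimal scikit-learn sketch of the modeling loop the abstract outlines follows; all input conventions (per-cell covariates, unit indexing, unit totals) and names are assumptions for illustration, not the authors' released pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def dasymetric_population(covariates, unit_id, unit_pop, unit_area_km2):
    """Random Forest dasymetric redistribution of census counts.

    Assumed inputs: `covariates` is (n_cells, n_features) of remotely
    sensed/geospatial layers per ~100 m grid cell; `unit_id[i]` is the
    census-unit index (0..K-1) of cell i; `unit_pop` and `unit_area_km2`
    are per-unit totals aligned with those indices.
    """
    n_units = len(unit_pop)
    # Train at the census-unit level on log population density, using
    # mean covariate values over each unit's cells.
    X_unit = np.stack([covariates[unit_id == u].mean(axis=0)
                       for u in range(n_units)])
    y_unit = np.log(unit_pop / unit_area_km2)
    rf = RandomForestRegressor(n_estimators=500, n_jobs=-1)
    rf.fit(X_unit, y_unit)
    # Per-cell prediction becomes the dasymetric weighting surface.
    weights = np.exp(rf.predict(covariates))
    # Redistribute each unit's count proportionally to the weights,
    # so unit totals are preserved exactly.
    pop = np.zeros(len(covariates))
    for u in range(n_units):
        mask = unit_id == u
        pop[mask] = unit_pop[u] * weights[mask] / weights[mask].sum()
    return pop
```

Normalizing the weights within each census unit is what keeps the gridded output consistent with the official counts while still letting the covariates shape the within-unit distribution.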

Journal ArticleDOI
TL;DR: Efforts to increase women’s participation in computer science, engineering, and physics may benefit from changing masculine cultures and providing students with early experiences that signal equally to both girls and boys that they belong and can succeed in these fields.
Abstract: Women obtain more than half of U.S. undergraduate degrees in biology, chemistry, and mathematics, yet they earn less than 20% of computer science, engineering, and physics undergraduate degrees (National Science Foundation, 2014a). Gender differences in interest in computer science, engineering, and physics appear even before college. Why are women represented in some science, technology, engineering, and mathematics (STEM) fields more than others? We conduct a critical review of the most commonly cited factors explaining gender disparities in STEM participation and investigate whether these factors explain differential gender participation across STEM fields. Math performance and discrimination influence who enters STEM, but there is little evidence to date that these factors explain why women's underrepresentation is relatively worse in some STEM fields. We introduce a model with three overarching factors to explain the larger gender gaps in participation in computer science, engineering, and physics than in biology, chemistry, and mathematics: (a) masculine cultures that signal a lower sense of belonging to women than men, (b) a lack of sufficient early experience with computer science, engineering, and physics, and (c) gender gaps in self-efficacy. Efforts to increase women's participation in computer science, engineering, and physics may benefit from changing masculine cultures and providing students with early experiences that signal equally to both girls and boys that they belong and can succeed in these fields.

Journal ArticleDOI
TL;DR: The ASTRA Toolbox provides an extensive set of fast and flexible building blocks that can be used to develop advanced reconstruction algorithms, effectively removing limitations in the geometrical parameters of the acquisition model and the algorithms used for reconstruction.
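
As a usage illustration, the sketch below assembles a basic 2D parallel-beam SIRT reconstruction from the toolbox's documented Python building blocks (geometries, projector, data objects, algorithm config); the phantom, detector count, and iteration number are arbitrary example choices, not a recommended setup.

```python
import astra
import numpy as np

# Volume and parallel-beam projection geometries (example sizes).
vol_geom = astra.create_vol_geom(256, 256)
angles = np.linspace(0, np.pi, 180, endpoint=False)
proj_geom = astra.create_proj_geom('parallel', 1.0, 384, angles)
proj_id = astra.create_projector('linear', proj_geom, vol_geom)

# Forward-project a toy phantom to obtain a sinogram.
phantom = np.zeros((256, 256), dtype=np.float32)
phantom[96:160, 96:160] = 1.0
sino_id, sino = astra.create_sino(phantom, proj_id)

# Configure and run 100 iterations of SIRT.
rec_id = astra.data2d.create('-vol', vol_geom)
cfg = astra.astra_dict('SIRT')
cfg['ReconstructionDataId'] = rec_id
cfg['ProjectionDataId'] = sino_id
cfg['ProjectorId'] = proj_id
alg_id = astra.algorithm.create(cfg)
astra.algorithm.run(alg_id, 100)
reconstruction = astra.data2d.get(rec_id)

# Release ASTRA-managed objects when done.
astra.algorithm.delete(alg_id)
astra.data2d.delete(rec_id)
astra.data2d.delete(sino_id)
astra.projector.delete(proj_id)
```

Because geometry, projector, and algorithm are separate objects, swapping in a different acquisition geometry or reconstruction algorithm only changes the corresponding building block, which is the flexibility the TL;DR highlights.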

Journal ArticleDOI
TL;DR: The findings suggest that although single-pathogen strategies have an important role in the reduction of the burden of severe diarrhoeal disease, the effect of such interventions on total diarrhoeal incidence at the community level might be limited.

Proceedings ArticleDOI
Heechul Jung, Sihaeng Lee, Junho Yim, Sunjeong Park, Junmo Kim
07 Dec 2015
TL;DR: A deep learning technique, regarded as a tool to automatically extract useful features from raw data, is adopted, and two deep models are combined using a new integration method in order to boost the performance of facial expression recognition.
Abstract: Temporal information has useful features for recognizing facial expressions. However, to manually design useful features requires a lot of effort. In this paper, to reduce this effort, a deep learning technique, which is regarded as a tool to automatically extract useful features from raw data, is adopted. Our deep network is based on two different models. The first deep network extracts temporal appearance features from image sequences, while the other deep network extracts temporal geometry features from temporal facial landmark points. These two models are combined using a new integration method in order to boost the performance of the facial expression recognition. Through several experiments, we show that the two models cooperate with each other. As a result, we achieve superior performance to other state-of-the-art methods in the CK+ and Oulu-CASIA databases. Furthermore, we show that our new integration method gives more accurate results than traditional methods, such as a weighted summation and a feature concatenation method.
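
One way to picture combining the two streams end-to-end, rather than by a fixed weighted summation, is sketched below in PyTorch; the learnable mixing scalar is an assumed stand-in for the paper's integration method, and all names are illustrative.

```python
import torch
import torch.nn as nn

class JointTwoStream(nn.Module):
    """Combine an appearance stream and a geometry stream.

    Instead of a fixed weighted summation, the mixing weight is a
    trainable parameter optimized jointly with both streams, so the
    combination itself is fine-tuned on the expression labels.
    """
    def __init__(self, appearance_net, geometry_net):
        super().__init__()
        self.a_net = appearance_net   # consumes image sequences
        self.g_net = geometry_net     # consumes landmark trajectories
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, frames, landmarks):
        logits_a = self.a_net(frames)      # temporal appearance stream
        logits_g = self.g_net(landmarks)   # temporal geometry stream
        w = torch.sigmoid(self.alpha)      # learned mixing weight in (0, 1)
        return w * logits_a + (1 - w) * logits_g
```

Training this module end-to-end lets the gradient decide how much each stream should contribute, which is the kind of cooperation between the two models the abstract reports.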

Posted Content
TL;DR: Experimental results show that ERNIE outperforms other baseline methods, achieving new state-of-the-art results on five Chinese natural language processing tasks including natural language inference, semantic similarity, named entity recognition, sentiment analysis and question answering.
Abstract: We present a novel language representation model enhanced by knowledge called ERNIE (Enhanced Representation through kNowledge IntEgration). Inspired by the masking strategy of BERT, ERNIE is designed to learn language representation enhanced by knowledge masking strategies, which includes entity-level masking and phrase-level masking. Entity-level strategy masks entities which are usually composed of multiple words. Phrase-level strategy masks the whole phrase which is composed of several words standing together as a conceptual unit. Experimental results show that ERNIE outperforms other baseline methods, achieving new state-of-the-art results on five Chinese natural language processing tasks including natural language inference, semantic similarity, named entity recognition, sentiment analysis and question answering. We also demonstrate that ERNIE has more powerful knowledge inference capacity on a cloze test.
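
The span-level masking idea can be sketched in a few lines of Python; the (start, end) span list, the 15% budget, and the function name are assumptions for illustration (an external chunker or entity recognizer is presumed to supply the spans), not ERNIE's exact recipe.

```python
import random

def knowledge_masking(tokens, spans, mask_token="[MASK]", budget_frac=0.15):
    """Mask whole entity/phrase spans instead of isolated word pieces.

    tokens: list of token strings.
    spans: list of (start, end) index pairs marking entities or phrases,
    assumed to come from an external chunker/NER system.
    """
    out = list(tokens)
    spans = list(spans)
    random.shuffle(spans)                      # pick spans at random
    budget = max(1, int(budget_frac * len(tokens)))
    masked = 0
    for start, end in spans:
        if masked >= budget:
            break
        for i in range(start, end):            # mask the whole unit at once
            out[i] = mask_token
        masked += end - start
    return out
```

Masking a multi-word unit in one piece forces the model to predict it from surrounding context rather than from the unit's own fragments, which is how the knowledge-integration effect the abstract describes arises.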

Journal ArticleDOI
TL;DR: In this article, a call to action targets a reversal of paradigms, from a carbon-centric model to one that treats the hydrologic and climate cooling effects of trees and forests as the first order of priority.
Abstract: Forest-driven water and energy cycles are poorly integrated into regional, national, continental and global decision-making on climate change adaptation, mitigation, land use and water management. This constrains humanity's ability to protect our planet's climate and life-sustaining functions. The substantial body of research we review reveals that forest, water and energy interactions provide the foundations for carbon storage, for cooling terrestrial surfaces and for distributing water resources. Forests and trees must be recognized as prime regulators within the water, energy and carbon cycles. If these functions are ignored, planners will be unable to assess, adapt to or mitigate the impacts of changing land cover and climate. Our call to action targets a reversal of paradigms, from a carbon-centric model to one that treats the hydrologic and climate-cooling effects of trees and forests as the first order of priority. For reasons of sustainability, carbon storage must remain a secondary, though valuable, by-product. The effects of tree cover on climate at local, regional and continental scales offer benefits that demand wider recognition. The forest- and tree-centered research insights we review and analyze provide a knowledge-base for improving plans, policies and actions. Our understanding of how trees and forests influence water, energy and carbon cycles has important implications, both for the structure of planning, management and governance institutions, as well as for how trees and forests might be used to improve sustainability, adaptation and mitigation efforts.

Journal ArticleDOI
TL;DR: Existing evidence suggests that mental disorders tend to be highly prevalent in war refugees many years after resettlement, and there is a need for more methodologically consistent and rigorous research on the mental health of long-settled war refugees.
Abstract: There are several million war-refugees worldwide, the majority of whom stay in the recipient countries for years. However, little is known about their long-term mental health. This review aimed to assess the prevalence of mental disorders and to identify their correlates among long-settled war-refugees. We conducted a systematic review of studies that assessed current prevalence and/or factors associated with depression and anxiety disorders in adult war-refugees 5 years or longer after displacement. We searched Medline, Embase, CINAHL, PsycINFO, and PILOTS from their inception to October 2014, searched reference lists, and contacted experts. Because of a high heterogeneity between studies, overall estimates of mental disorders were not discussed. Instead, prevalence rates were reviewed narratively and possible sources of heterogeneity between studies were investigated both by subgroup analysis and narratively. A descriptive analysis examined pre-migration and post-migration factors associated with mental disorders in this population. The review identified 29 studies on long-term mental health with a total of 16,010 war-affected refugees. There was significant between-study heterogeneity in prevalence rates of depression (range 2.3–80 %), PTSD (4.4–86 %), and unspecified anxiety disorder (20.3–88 %), although prevalence estimates were typically in the range of 20 % and above. Both clinical and methodological factors contributed substantially to the observed heterogeneity. Studies of higher methodological quality generally reported lower prevalence rates. Prevalence rates were also related to both which country the refugees came from and in which country they resettled. Refugees from former Yugoslavia and Cambodia tended to report the highest rates of mental disorders, as well as refugees residing in the USA. Descriptive synthesis suggested that greater exposure to pre-migration traumatic experiences and post-migration stress were the most consistent factors associated with all three disorders, whilst a poor post-migration socio-economic status was particularly associated with depression. There is a need for more methodologically consistent and rigorous research on the mental health of long-settled war refugees. Existing evidence suggests that mental disorders tend to be highly prevalent in war refugees many years after resettlement. This increased risk may not only be a consequence of exposure to wartime trauma but may also be influenced by post-migration socio-economic factors.

Journal ArticleDOI
05 Oct 2018-Science
TL;DR: The concept of a light-dependent activation barrier is introduced to account for the effects of light illumination on electronic and thermal excitations in a single unified picture; this framework provides insight into the specific role of hot carriers in plasmon-mediated photochemistry, which is critically important for designing energy-efficient plasmonic photocatalysts.
Abstract: Photocatalysis based on optically active, “plasmonic” metal nanoparticles has emerged as a promising approach to facilitate light-driven chemical conversions under far milder conditions than thermal catalysis. However, an understanding of the relation between thermal and electronic excitations has been lacking. We report the substantial light-induced reduction of the thermal activation barrier for ammonia decomposition on a plasmonic photocatalyst. We introduce the concept of a light-dependent activation barrier to account for the effect of light illumination on electronic and thermal excitations in a single unified picture. This framework provides insight into the specific role of hot carriers in plasmon-mediated photochemistry, which is critically important for designing energy-efficient plasmonic photocatalysts.
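
The unified picture can be summarized with an Arrhenius-type rate in which the apparent barrier depends on illumination; the linear intensity dependence below is an assumed functional form for illustration, not the paper's fitted result.

```latex
% Illustrative Arrhenius-type rate with a light-dependent barrier:
%   \nu  = pre-exponential factor, k_B = Boltzmann constant,
%   E_a^{therm} = purely thermal barrier, and c\,I is an assumed
%   (linear) barrier reduction under illumination intensity I.
r(T, I) = \nu \exp\!\left(-\frac{E_a(I)}{k_B T}\right),
\qquad E_a(I) = E_a^{\mathrm{therm}} - c\, I
```

In this form, measuring the apparent barrier at several intensities separates the hot-carrier (intensity-dependent) contribution from purely thermal heating, which is the diagnostic role the abstract assigns to the concept.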

Journal ArticleDOI
TL;DR: In this paper, the authors study how stock markets react to positive and negative events concerned with a firm's corporate social responsibility (CSR), and they show that investors respond strongly negatively to negative events and weakly negatively to positive events.

Journal ArticleDOI
TL;DR: In this article, the authors identify specific mechanisms through which automation may affect travel and energy demand and resulting GHG emissions and bring them together using a coherent energy decomposition framework, and explore the net effects of automation on emissions through several illustrative scenarios.
Abstract: Experts predict that new automobiles will be capable of driving themselves under limited conditions within 5–10 years, and under most conditions within 10–20 years. Automation may affect road vehicle energy consumption and greenhouse gas (GHG) emissions in a host of ways, positive and negative, by causing changes in travel demand, vehicle design, vehicle operating profiles, and choices of fuels. In this paper, we identify specific mechanisms through which automation may affect travel and energy demand and resulting GHG emissions and bring them together using a coherent energy decomposition framework. We review the literature for estimates of the energy impacts of each mechanism and, where the literature is lacking, develop our own estimates using engineering and economic analysis. We consider how widely applicable each mechanism is, and quantify the potential impact of each mechanism on a common basis: the percentage change it is expected to cause in total GHG emissions from light-duty or heavy-duty vehicles in the U.S. Our primary focus is travel related energy consumption and emissions, since potential lifecycle impacts are generally smaller in magnitude. We explore the net effects of automation on emissions through several illustrative scenarios, finding that automation might plausibly reduce road transport GHG emissions and energy use by nearly half – or nearly double them – depending on which effects come to dominate. We also find that many potential energy-reduction benefits may be realized through partial automation, while the major energy/emission downside risks appear more likely at full automation. We close by presenting some implications for policymakers and identifying priority areas for further research.
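
To show how such a decomposition framework combines mechanism-level estimates, here is a minimal Python sketch; treating the mechanisms as independent multiplicative factors is an assumption, and the example percentages are purely hypothetical, not the paper's estimates.

```python
def net_ghg_change(mechanism_effects):
    """Combine per-mechanism impacts in a simple decomposition.

    Each mechanism is assumed to change total road-transport GHG
    emissions by some percentage; treating the effects as independent
    multiplicative factors, the net impact is the product of the
    individual multipliers, returned as a net percentage change.
    """
    net = 1.0
    for pct_change in mechanism_effects.values():
        net *= 1.0 + pct_change / 100.0
    return (net - 1.0) * 100.0

# Illustrative scenario with hypothetical values (percent changes):
scenario = {
    "platooning": -10,
    "eco-driving": -15,
    "higher travel speeds": +7,
    "induced travel demand": +30,
}
print(f"net GHG change: {net_ghg_change(scenario):+.1f}%")
```

Running the same calculation with optimistic versus pessimistic mechanism values reproduces the paper's qualitative finding that automation could either roughly halve or roughly double emissions depending on which effects dominate.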