
Showing papers by "Lancaster University" published in 2016


Journal ArticleDOI
Daniel J. Klionsky1, Kotb Abdelmohsen2, Akihisa Abe3, Joynal Abedin4  +2519 moreInstitutions (695)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.

5,187 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present an extension to the Consolidated Standards of Reporting Trials (CONSORT) statement for randomised pilot and feasibility trials conducted in advance of a future definitive RCT.
Abstract: The Consolidated Standards of Reporting Trials (CONSORT) statement is a guideline designed to improve the transparency and quality of the reporting of randomised controlled trials (RCTs). In this article we present an extension to that statement for randomised pilot and feasibility trials conducted in advance of a future definitive RCT. The checklist applies to any randomised study in which a future definitive RCT, or part of it, is conducted on a smaller scale, regardless of its design (eg, cluster, factorial, crossover) or the terms used by authors to describe the study (eg, pilot, feasibility, trial, study). The extension does not directly apply to internal pilot studies built into the design of a main trial, non-randomised pilot and feasibility studies, or phase II studies, but these studies all have some similarities to randomised pilot and feasibility studies and so many of the principles might also apply. The development of the extension was motivated by the growing number of studies described as feasibility or pilot studies and by research that has identified weaknesses in their reporting and conduct. We followed recommended good practice to develop the extension, including carrying out a Delphi survey, holding a consensus meeting and research team meetings, and piloting the checklist. The aims and objectives of pilot and feasibility randomised studies differ from those of other randomised trials. Consequently, although much of the information to be reported in these trials is similar to those in randomised controlled trials (RCTs) assessing effectiveness and efficacy, there are some key differences in the type of information and in the appropriate interpretation of standard CONSORT reporting items. We have retained some of the original CONSORT statement items, but most have been adapted, some removed, and new items added. The new items cover how participants were identified and consent obtained; if applicable, the prespecified criteria used to judge whether or how to proceed with a future definitive RCT; if relevant, other important unintended consequences; implications for progression from pilot to future definitive RCT, including any proposed amendments; and ethical approval or approval by a research review committee confirmed with a reference number. This article includes the 26 item checklist, a separate checklist for the abstract, a template for a CONSORT flowchart for these studies, and an explanation of the changes made and supporting examples. We believe that routine use of this proposed extension to the CONSORT statement will result in improvements in the reporting of pilot trials. Editor’s note: In order to encourage its wide dissemination this article is freely accessible on the BMJ and Pilot and Feasibility Studies journal websites.

1,799 citations


Journal ArticleDOI
07 Apr 2016
TL;DR: In this paper, the authors explore and discuss how soil scientists can help to reach the recently adopted UN Sustainable Development Goals (SDGs) in the most effective manner and recommend the following steps to be taken by the soil science community as a whole: (i) embrace the UN SDGs, as they provide a platform that allows soil science to demonstrate its relevance for realizing a sustainable society by 2030; (ii) show the specific value of soil science: research should explicitly show how using modern soil information can improve the results of inter- and transdisciplinary studies on SDGs related to food security
Abstract: In this forum paper we discuss how soil scientists can help to reach the recently adopted UN Sustainable Development Goals (SDGs) in the most effective manner. Soil science, as a land-related discipline, has important links to several of the SDGs, which are demonstrated through the functions of soils and the ecosystem services that are linked to those functions (see graphical abstract in the Supplement). We explore and discuss how soil scientists can rise to the challenge both internally, in terms of our procedures and practices, and externally, in terms of our relations with colleague scientists in other disciplines, diverse groups of stakeholders and the policy arena. To meet these goals we recommend the following steps to be taken by the soil science community as a whole: (i) embrace the UN SDGs, as they provide a platform that allows soil science to demonstrate its relevance for realizing a sustainable society by 2030; (ii) show the specific value of soil science: research should explicitly show how using modern soil information can improve the results of inter- and transdisciplinary studies on SDGs related to food security, water scarcity, climate change, biodiversity loss and health threats; (iii) take leadership in overarching system analysis of ecosystems, as soils and soil scientists have an integrated nature and this places soil scientists in a unique position; (iv) raise awareness of soil organic matter as a key attribute of soils to illustrate its importance for soil functions and ecosystem services; (v) improve the transfer of knowledge through knowledge brokers with a soil background; (vi) start at the basis: educational programmes are needed at all levels, starting in primary schools, and emphasizing practical, down-to-earth examples; (vii) facilitate communication with the policy arena by framing research in terms that resonate with politicians in terms of the policy cycle or by considering drivers, pressures and responses affecting impacts of land use change; and finally (viii) all this is only possible if researchers, with soil scientists in the front lines, look over the hedge towards other disciplines, to the world at large and to the policy arena, reaching over to listen first, as a basis for genuine collaboration.

1,010 citations


Journal ArticleDOI
TL;DR: This paper presents an overview of SA and its links to uncertainty analysis, model calibration and evaluation, and robust decision-making, and provides practical guidelines by developing a workflow for the application of SA.
Abstract: Sensitivity Analysis (SA) investigates how the variation in the output of a numerical model can be attributed to variations of its input factors. SA is increasingly being used in environmental modelling for a variety of purposes, including uncertainty assessment, model calibration and diagnostic evaluation, dominant control analysis and robust decision-making. In this paper we review the SA literature with the goal of providing: (i) a comprehensive view of SA approaches also in relation to other methodologies for model identification and application; (ii) a systematic classification of the most commonly used SA methods; (iii) practical guidelines for the application of SA. The paper aims at delivering an introduction to SA for non-specialist readers, as well as practical advice with best practice examples from the literature; and at stimulating the discussion within the community of SA developers and users regarding the setting of good practices and on defining priorities for future research. We present an overview of SA and its link to uncertainty analysis, model calibration and evaluation, and robust decision-making. We provide a systematic review of existing approaches, which can support users in the choice of an SA method. We provide practical guidelines by developing a workflow for the application of SA and discuss critical choices. We give best practice examples from the literature and highlight trends and gaps for future research.
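For readers unfamiliar with variance-based SA, the sketch below shows one common member of the family the review covers: first-order Sobol indices estimated with Saltelli-style sampling on a toy three-input model. It illustrates the class of methods rather than the paper's own workflow, and the test model, input ranges and sample size are arbitrary assumptions.

```python
import numpy as np

def model(x):
    """Toy nonlinear model (Ishigami-style): output depends unevenly on 3 inputs."""
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

def sobol_first_order(f, n_inputs, n_samples=10_000, seed=0):
    """Estimate first-order Sobol indices with the Saltelli sampling scheme."""
    rng = np.random.default_rng(seed)
    # Two independent sample matrices over the input space (uniform on [-pi, pi] here)
    A = rng.uniform(-np.pi, np.pi, size=(n_samples, n_inputs))
    B = rng.uniform(-np.pi, np.pi, size=(n_samples, n_inputs))
    yA, yB = f(A), f(B)
    var_y = np.var(np.concatenate([yA, yB]))
    S = np.empty(n_inputs)
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                     # radial sample: column i taken from B
        yABi = f(ABi)
        S[i] = np.mean(yB * (yABi - yA)) / var_y  # Saltelli (2010) first-order estimator
    return S

print(sobol_first_order(model, n_inputs=3))  # relative importance of each input factor
```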

888 citations


Journal ArticleDOI
18 Nov 2016-Science
TL;DR: The bioengineering of an accelerated response to natural shading events in Nicotiana (tobacco) is described, resulting in increased leaf carbon dioxide uptake and plant dry matter productivity by about 15% in fluctuating light.
Abstract: Crop leaves in full sunlight dissipate damaging excess absorbed light energy as heat. When sunlit leaves are shaded by clouds or other leaves, this protective dissipation continues for many minutes and reduces photosynthesis. Calculations have shown that this could cost field crops up to 20% of their potential yield. Here, we describe the bioengineering of an accelerated response to natural shading events in Nicotiana (tobacco), resulting in increased leaf carbon dioxide uptake and plant dry matter productivity by about 15% in fluctuating light. Because the photoprotective mechanism that has been altered is common to all flowering plants and crops, the findings provide proof of concept for a route to obtaining a sustainable increase in productivity for food crops and a much-needed yield jump.

882 citations


Journal ArticleDOI
TL;DR: A robust approach for sample preparation, instrumentation, acquisition parameters and data processing is explored and it is expected that a typical Raman experiment can be performed by a nonspecialist user to generate high-quality data for biological materials analysis.
Abstract: Raman spectroscopy can be used to measure the chemical composition of a sample, which can in turn be used to extract biological information. Many materials have characteristic Raman spectra, which means that Raman spectroscopy has proven to be an effective analytical approach in geology, semiconductor, materials and polymer science fields. The application of Raman spectroscopy and microscopy within biology is rapidly increasing because it can provide chemical and compositional information, but it does not typically suffer from interference from water molecules. Analysis does not conventionally require extensive sample preparation; biochemical and structural information can usually be obtained without labeling. In this protocol, we aim to standardize and bring together multiple experimental approaches from key leaders in the field for obtaining Raman spectra using a microspectrometer. As examples of the range of biological samples that can be analyzed, we provide instructions for acquiring Raman spectra, maps and images for fresh plant tissue, formalin-fixed and fresh frozen mammalian tissue, fixed cells and biofluids. We explore a robust approach for sample preparation, instrumentation, acquisition parameters and data processing. By using this approach, we expect that a typical Raman experiment can be performed by a nonspecialist user to generate high-quality data for biological materials analysis.
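As a flavour of the data-processing step mentioned above, here is a minimal, hypothetical preprocessing sketch for a single spectrum (Savitzky-Golay smoothing, crude polynomial baseline subtraction, vector normalisation). The settings and the synthetic spectrum are illustrative assumptions, not the protocol's prescribed pipeline.

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess_raman(wavenumbers, intensities, baseline_order=5,
                     smooth_window=11, smooth_poly=3):
    """Smooth, baseline-correct and vector-normalise a single Raman spectrum."""
    # 1. Noise smoothing (Savitzky-Golay preserves peak shape better than a moving average)
    smoothed = savgol_filter(intensities, smooth_window, smooth_poly)
    # 2. Crude baseline estimate: low-order polynomial fitted to the whole spectrum
    coeffs = np.polyfit(wavenumbers, smoothed, baseline_order)
    corrected = smoothed - np.polyval(coeffs, wavenumbers)
    # 3. Vector normalisation so spectra from different acquisitions are comparable
    return corrected / np.linalg.norm(corrected)

# Synthetic data standing in for a measured spectrum (a single peak on a sloping background)
wn = np.linspace(400, 1800, 1401)  # wavenumber axis (cm^-1)
spectrum = np.exp(-((wn - 1005) / 8) ** 2) + 0.001 * wn + 0.05 * np.random.randn(wn.size)
clean = preprocess_raman(wn, spectrum)
```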

814 citations


Journal ArticleDOI
TL;DR: This work considers whether wearable technology can become a valuable asset for health care and investigates the role that smartwatches can play in this process.
Abstract: Lukasz Piwek and colleagues consider whether wearable technology can become a valuable asset for health care.

788 citations


Journal ArticleDOI
15 Mar 2016-PLOS ONE
TL;DR: A framework for defining pilot and feasibility studies focusing on studies conducted in preparation for a randomised controlled trial is described, suggesting that to facilitate their identification, these studies should be clearly identified using the terms ‘feasibility’ or ‘pilot’ as appropriate.
Abstract: We describe a framework for defining pilot and feasibility studies focusing on studies conducted in preparation for a randomised controlled trial. To develop the framework, we undertook a Delphi survey; ran an open meeting at a trial methodology conference; conducted a review of definitions outside the health research context; consulted experts at an international consensus meeting; and reviewed 27 empirical pilot or feasibility studies. We initially adopted mutually exclusive definitions of pilot and feasibility studies. However, some Delphi survey respondents and the majority of open meeting attendees disagreed with the idea of mutually exclusive definitions. Their viewpoint was supported by definitions outside the health research context, the use of the terms 'pilot' and 'feasibility' in the literature, and participants at the international consensus meeting. In our framework, pilot studies are a subset of feasibility studies, rather than the two being mutually exclusive. A feasibility study asks whether something can be done, should we proceed with it, and if so, how. A pilot study asks the same questions but also has a specific design feature: in a pilot study a future study, or part of a future study, is conducted on a smaller scale. We suggest that to facilitate their identification, these studies should be clearly identified using the terms 'feasibility' or 'pilot' as appropriate. This should include feasibility studies that are largely qualitative; we found these difficult to identify in electronic searches because researchers rarely used the term 'feasibility' in the title or abstract of such studies. Investigators should also report appropriate objectives and methods related to feasibility; and give clear confirmation that their study is in preparation for a future randomised controlled trial designed to assess the effect of an intervention.

756 citations


Journal ArticleDOI
07 Jul 2016-Nature
TL;DR: In this article, the authors used a large data set of plants, birds and dung beetles (1,538, 460 and 156 species, respectively) sampled in 36 catchments in the Brazilian state of Pará.
Abstract: Concerted political attention has focused on reducing deforestation, and this remains the cornerstone of most biodiversity conservation strategies. However, maintaining forest cover may not reduce anthropogenic forest disturbances, which are rarely considered in conservation programmes. These disturbances occur both within forests, including selective logging and wildfires, and at the landscape level, through edge, area and isolation effects. Until now, the combined effect of anthropogenic disturbance on the conservation value of remnant primary forests has remained unknown, making it impossible to assess the relative importance of forest disturbance and forest loss. Here we address these knowledge gaps using a large data set of plants, birds and dung beetles (1,538, 460 and 156 species, respectively) sampled in 36 catchments in the Brazilian state of Pará. Catchments retaining more than 69–80% forest cover lost more conservation value from disturbance than from forest loss. For example, a 20% loss of primary forest, the maximum level of deforestation allowed on Amazonian properties under Brazil’s Forest Code, resulted in a 39–54% loss of conservation value: 96–171% more than expected without considering disturbance effects. We extrapolated the disturbance-mediated loss of conservation value throughout Pará, which covers 25% of the Brazilian Amazon. Although disturbed forests retained considerable conservation value compared with deforested areas, the toll of disturbance outside Pará’s strictly protected areas is equivalent to the loss of 92,000–139,000 km2 of primary forest. Even this lowest estimate is greater than the area deforested across the entire Brazilian Amazon between 2006 and 2015 (ref. 10). Species distribution models showed that both landscape and within-forest disturbances contributed to biodiversity loss, with the greatest negative effects on species of high conservation and functional value. These results demonstrate an urgent need for policy interventions that go beyond the maintenance of forest cover to safeguard the hyper-diversity of tropical forest ecosystems.

698 citations


Journal ArticleDOI
TL;DR: The authors used magnetic analyses and electron microscopy to identify the abundant presence in the brain of magnetite nanoparticles that are consistent with high-temperature formation, suggesting an external, not internal, source.
Abstract: Biologically formed nanoparticles of the strongly magnetic mineral, magnetite, were first detected in the human brain over 20 y ago [Kirschvink JL, Kobayashi-Kirschvink A, Woodford BJ (1992) Proc Natl Acad Sci USA 89(16):7683-7687]. Magnetite can have potentially large impacts on the brain due to its unique combination of redox activity, surface charge, and strongly magnetic behavior. We used magnetic analyses and electron microscopy to identify the abundant presence in the brain of magnetite nanoparticles that are consistent with high-temperature formation, suggesting, therefore, an external, not internal, source. Comprising a separate nanoparticle population from the euhedral particles ascribed to endogenous sources, these brain magnetites are often found with other transition metal nanoparticles, and they display rounded crystal morphologies and fused surface textures, reflecting crystallization upon cooling from an initially heated, iron-bearing source material. Such high-temperature magnetite nanospheres are ubiquitous and abundant in airborne particulate matter pollution. They arise as combustion-derived, iron-rich particles, often associated with other transition metal particles, which condense and/or oxidize upon airborne release. Those magnetite pollutant particles which are <∼200 nm in diameter can enter the brain directly via the olfactory bulb. Their presence proves that externally sourced iron-bearing nanoparticles, rather than their soluble compounds, can be transported directly into the brain, where they may pose hazard to human health.

697 citations


Journal ArticleDOI
04 Mar 2016-Science
TL;DR: Graphene hosts a unique electron system in which electron-phonon scattering is extremely weak but electron-electron collisions are sufficiently frequent to provide local equilibrium above the temperature of liquid nitrogen, under which electrons can behave as a viscous liquid and exhibit hydrodynamic phenomena similar to classical liquids.
Abstract: Graphene hosts a unique electron system in which electron-phonon scattering is extremely weak but electron-electron collisions are sufficiently frequent to provide local equilibrium above the temperature of liquid nitrogen. Under these conditions, electrons can behave as a viscous liquid and exhibit hydrodynamic phenomena similar to classical liquids. Here we report strong evidence for this transport regime. We found that doped graphene exhibits an anomalous (negative) voltage drop near current-injection contacts, which is attributed to the formation of submicrometer-size whirlpools in the electron flow. The viscosity of graphene’s electron liquid is found to be ~0.1 square meters per second, an order of magnitude higher than that of honey, in agreement with many-body theory. Our work demonstrates the possibility of studying electron hydrodynamics using high-quality graphene.

Proceedings ArticleDOI
13 Nov 2016
TL;DR: This paper develops models describing LoRa communication behaviour and uses these models to parameterise a LoRa simulation to study scalability, showing that a typical smart city deployment can support 120 nodes per 3.8 ha, which is not sufficient for future IoT deployments.
Abstract: New Internet of Things (IoT) technologies such as Long Range (LoRa) are emerging which enable power efficient wireless communication over very long distances. Devices typically communicate directly to a sink node, which removes the need of constructing and maintaining a complex multi-hop network. Given the fact that a wide area is covered and that all devices communicate directly to a few sink nodes, a large number of nodes have to share the communication medium. For this reason, LoRa provides a range of communication options (centre frequency, spreading factor, bandwidth, coding rates) from which a transmitter can choose. Many combination settings are orthogonal and provide simultaneous collision-free communications. Nevertheless, there is a limit regarding the number of transmitters a LoRa system can support. In this paper we investigate the capacity limits of LoRa networks. Using experiments we develop models describing LoRa communication behaviour. We use these models to parameterise a LoRa simulation to study scalability. Our experiments show that a typical smart city deployment can support 120 nodes per 3.8 ha, which is not sufficient for future IoT deployments. LoRa networks can scale quite well, however, if they use dynamic communication parameter selection and/or multiple sinks.
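The per-node capacity limit follows directly from how long each transmission occupies the shared channel. The sketch below computes LoRa time-on-air using the formula commonly stated in Semtech's SX127x datasheets; the defaults (8-symbol preamble, explicit header, CRC enabled) are assumptions for illustration and not the paper's experimental settings.

```python
import math

def lora_airtime(payload_bytes, sf=7, bw_hz=125_000, cr=1,
                 preamble_symbols=8, explicit_header=True, low_data_rate_opt=False):
    """Approximate LoRa packet time-on-air in seconds (SX127x datasheet formula)."""
    t_sym = (2 ** sf) / bw_hz                      # symbol duration
    ih = 0 if explicit_header else 1               # implicit-header flag
    de = 1 if low_data_rate_opt else 0             # low-data-rate optimisation flag
    crc = 1                                        # payload CRC enabled
    payload_symbols = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 * crc - 20 * ih)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    t_preamble = (preamble_symbols + 4.25) * t_sym
    return t_preamble + payload_symbols * t_sym

# A 20-byte reading occupies the channel roughly 23x longer at SF12 than at SF7,
# which is why spreading-factor choice dominates how many nodes a cell can carry.
print(lora_airtime(20, sf=7), lora_airtime(20, sf=12, low_data_rate_opt=True))
```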

Journal ArticleDOI
TL;DR: A novel dynamic power allocation scheme is proposed for downlink and uplink non-orthogonal multiple access (NOMA) scenarios with two users for more flexibly meeting various quality of service requirements.
Abstract: In this paper, a novel dynamic power allocation scheme is proposed for downlink and uplink non-orthogonal multiple access (NOMA) scenarios with two users for more flexibly meeting various quality of service requirements. The exact expressions for the outage probability and the average rate achieved by the proposed scheme, as well as their high signal-to-noise ratio approximations, are established. Compared with the existing works, such as NOMA with fixed power allocation and cognitive radio inspired NOMA, the proposed scheme can: 1) strictly guarantee a performance gain over conventional orthogonal multiple access; and 2) offer more flexibility to realize different tradeoffs between the user fairness and system throughput. Monte Carlo simulation results are provided to demonstrate the accuracy of the developed analytical results and the performance gain of the proposed power allocation scheme.
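To make the outage-probability metric concrete, here is a small Monte Carlo sketch for the simpler fixed-power-allocation baseline that the paper compares against (two downlink users, Rayleigh fading, successive interference cancellation at the strong user). The power split, target rates and channel variances are arbitrary assumptions; this is not the paper's dynamic allocation scheme.

```python
import numpy as np

def noma_outage(snr_db, a1=0.8, a2=0.2, r1=0.5, r2=1.0, trials=200_000, seed=1):
    """Monte Carlo outage probabilities for a 2-user downlink NOMA link with
    fixed power allocation and Rayleigh fading (illustrative baseline only)."""
    rng = np.random.default_rng(seed)
    rho = 10 ** (snr_db / 10)                      # transmit SNR
    g1 = rng.exponential(1.0, trials)              # |h1|^2, weak user
    g2 = rng.exponential(1.5, trials)              # |h2|^2, strong user (better channel)
    th1, th2 = 2 ** r1 - 1, 2 ** r2 - 1            # SINR thresholds for the target rates
    # Weak user decodes its own signal, treating the strong user's signal as noise
    sinr1 = a1 * rho * g1 / (a2 * rho * g1 + 1)
    out1 = np.mean(sinr1 < th1)
    # Strong user must first decode (and cancel) the weak user's signal, then its own
    sinr12 = a1 * rho * g2 / (a2 * rho * g2 + 1)
    snr2 = a2 * rho * g2
    out2 = np.mean((sinr12 < th1) | (snr2 < th2))
    return out1, out2

print(noma_outage(20))   # outage probabilities at 20 dB transmit SNR
```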

Journal ArticleDOI
TL;DR: A charging mechanism parameter is introduced that quantifies the mechanism and allows comparisons between different systems; the mechanism is found to depend strongly on the polarization of the electrode and on the choice of the electrolyte and electrode materials.
Abstract: Supercapacitors (or electric double-layer capacitors) are high-power energy storage devices that store charge at the interface between porous carbon electrodes and an electrolyte solution. These devices are already employed in heavy electric vehicles and electronic devices, and can complement batteries in a more sustainable future. Their widespread application could be facilitated by the development of devices that can store more energy, without compromising their fast charging and discharging times. In situ characterization methods and computational modeling techniques have recently been developed to study the molecular mechanisms of charge storage, with the hope that better devices can be rationally designed. In this Perspective, we bring together recent findings from a range of experimental and computational studies to give a detailed picture of the charging mechanisms of supercapacitors. Nuclear magnetic resonance experiments and molecular dynamics simulations have revealed that the electrode pores contain a considerable number of ions in the absence of an applied charging potential. Experiments and computer simulations have shown that different charging mechanisms can then operate when a potential is applied, going beyond the traditional view of charging by counter-ion adsorption. It is shown that charging almost always involves ion exchange (swapping of co-ions for counter-ions), and rarely occurs by counter-ion adsorption alone. We introduce a charging mechanism parameter that quantifies the mechanism and allows comparisons between different systems. The mechanism is found to depend strongly on the polarization of the electrode, and the choice of the electrolyte and electrode materials. In light of these advances we identify new directions for supercapacitor research. Further experimental and computational work is needed to explain the factors that control supercapacitor charging mechanisms, and to establish the links between mechanisms and performance. Increased understanding and control of charging mechanisms should lead to new strategies for developing next-generation supercapacitors with improved performances.
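One plausible way to formalise such a parameter, consistent with the limiting cases described above (counter-ion adsorption, ion exchange, co-ion expulsion), is the change in the total number of in-pore ions per unit of stored electronic charge. The exact published definition may differ, so treat the sketch and its example numbers purely as an illustration.

```python
def charging_mechanism_parameter(n_total_charged, n_total_uncharged, stored_charge_e):
    """Hypothetical charging-mechanism parameter X: change in the total number of
    in-pore ions divided by the stored electronic charge (in elementary charges).
    X = +1 -> pure counter-ion adsorption, 0 -> pure ion exchange, -1 -> pure co-ion expulsion."""
    return (n_total_charged - n_total_uncharged) / abs(stored_charge_e)

# Example (made-up numbers): NMR or simulation indicates the pore gained 30 counter-ions
# and lost 70 co-ions while the electrode stored 100 elementary charges.
print(charging_mechanism_parameter(n_total_charged=960, n_total_uncharged=1000,
                                   stored_charge_e=100))   # -> -0.4 (ion exchange plus some co-ion expulsion)
```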

Journal ArticleDOI
David Otley1
TL;DR: It is suggested that the narrow view of contingency that relies on responses to generally applicable questionnaires needs to be replaced by a more tailored approach that takes into account the context of specific organizations.

Journal ArticleDOI
Georges Aad1, Brad Abbott2, Jalal Abdallah3, Ovsat Abdinov4  +2828 moreInstitutions (191)
TL;DR: In this article, the performance of the ATLAS muon identification and reconstruction was evaluated using the first LHC dataset recorded at √s = 13 TeV in 2015 and compared to Monte Carlo simulations.
Abstract: This article documents the performance of the ATLAS muon identification and reconstruction using the first LHC dataset recorded at √s = 13 TeV in 2015. Using a large sample of J/ψ→μμ and Z→μμ decays from 3.2 fb−1 of pp collision data, measurements of the reconstruction efficiency, as well as of the momentum scale and resolution, are presented and compared to Monte Carlo simulations. The reconstruction efficiency is measured to be close to 99% over most of the covered phase space (|η| 2.2, the pT resolution for muons from Z→μμ decays is 2.9% while the precision of the momentum scale for low-pT muons from J/ψ→μμ decays is about 0.2%.

Journal ArticleDOI
TL;DR: In this paper, the authors conceptualise energy use from a capabilities perspective, informed by the work of Amartya Sen, Martha Nussbaum and others following them, and suggest a corresponding definition of energy poverty, as understood in the capabilities space.

Journal ArticleDOI
TL;DR: This work studies the downlink sum rate maximization problem when the NOMA principle is applied, as well as the conditions under which the achievable rate maximization can be further simplified to a low complexity design problem, and computes the probability of occurrence of this event.
Abstract: Non-orthogonal multiple access (NOMA) systems have the potential to deliver higher system throughput, compared with contemporary orthogonal multiple access techniques. For a linearly precoded multiple-input single-output (MISO) system, we study the downlink sum rate maximization problem, when the NOMA principle is applied. Since this is a non-convex and intractable optimization problem, we resort to approximating it with a minorization-maximization algorithm (MMA), which is a widely used tool in statistics. In each step of the MMA, we solve a second-order cone program, such that the feasibility set in each step contains that of the previous one, and is always guaranteed to be a subset of the feasibility set of the original problem. It should be noted that the algorithm takes a few iterations to converge. Furthermore, we study the conditions under which the achievable rates maximization can be further simplified to a low complexity design problem, and we compute the probability of occurrence of this event. Numerical examples are conducted to show a comparison of the proposed approach against conventional multiple access systems.

Journal ArticleDOI
01 Mar 2016-Brain
TL;DR: It is shown that microglial proliferation in Alzheimer’s disease tissue correlates with overactivation of the colony-stimulating factor 1 receptor (CSF1R) pathway, and that inhibition of CSF1R arrests microglial proliferation and activation in a mouse model of Alzheimer's disease and slows disease progression.
Abstract: The proliferation and activation of microglial cells is a hallmark of several neurodegenerative conditions. This mechanism is regulated by the activation of the colony-stimulating factor 1 receptor (CSF1R), thus providing a target that may prevent the progression of conditions such as Alzheimer's disease. However, the study of microglial proliferation in Alzheimer's disease and validation of the efficacy of CSF1R-inhibiting strategies have not yet been reported. In this study we found increased proliferation of microglial cells in human Alzheimer's disease, in line with an increased upregulation of the CSF1R-dependent pro-mitogenic cascade, correlating with disease severity. Using a transgenic model of Alzheimer's-like pathology (APPswe, PSEN1dE9; APP/PS1 mice) we define a CSF1R-dependent progressive increase in microglial proliferation, in the proximity of amyloid-β plaques. Prolonged inhibition of CSF1R in APP/PS1 mice by an orally available tyrosine kinase inhibitor (GW2580) resulted in the blockade of microglial proliferation and the shifting of the microglial inflammatory profile to an anti-inflammatory phenotype. Pharmacological targeting of CSF1R in APP/PS1 mice resulted in an improved performance in memory and behavioural tasks and a prevention of synaptic degeneration, although these changes were not correlated with a change in the number of amyloid-β plaques. Our results provide the first proof of the efficacy of CSF1R inhibition in models of Alzheimer's disease, and validate the application of a therapeutic strategy aimed at modifying CSF1R activation as a promising approach to tackle microglial activation and the progression of Alzheimer's disease.

Journal ArticleDOI
21 Jul 2016-Nature
TL;DR: This paper identified 15 bright spots and 35 dark spots among more than 2,500 reefs worldwide and developed a Bayesian hierarchical model to generate expectations of how standing stocks of reef fish biomass are related to 18 socioeconomic drivers and environmental conditions.
Abstract: Ongoing declines in the structure and function of the world’s coral reefs require novel approaches to sustain these ecosystems and the millions of people who depend on them. A presently unexplored approach that draws on theory and practice in human health and rural development is to systematically identify and learn from the ‘outliers’—places where ecosystems are substantially better (‘bright spots’) or worse (‘dark spots’) than expected, given the environmental conditions and socioeconomic drivers they are exposed to. Here we compile data from more than 2,500 reefs worldwide and develop a Bayesian hierarchical model to generate expectations of how standing stocks of reef fish biomass are related to 18 socioeconomic drivers and environmental conditions. We identify 15 bright spots and 35 dark spots among our global survey of coral reefs, defined as sites that have biomass levels more than two standard deviations from expectations. Importantly, bright spots are not simply comprised of remote areas with low fishing pressure; they include localities where human populations and use of ecosystem resources is high, potentially providing insights into how communities have successfully confronted strong drivers of change. Conversely, dark spots are not necessarily the sites with the lowest absolute biomass and even include some remote, uninhabited locations often considered near pristine. We surveyed local experts about social, institutional, and environmental conditions at these sites to reveal that bright spots are characterized by strong sociocultural institutions such as customary taboos and marine tenure, high levels of local engagement in management, high dependence on marine resources, and beneficial environmental conditions such as deep-water refuges. Alternatively, dark spots are characterized by intensive capture and storage technology and a recent history of environmental shocks. Our results suggest that investments in strengthening fisheries governance, particularly aspects such as participation and property rights, could facilitate innovative conservation actions that help communities defy expectations of global reef degradation.
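The "more than two standard deviations from expectations" definition can be made concrete with a simple residual screen. The sketch below substitutes an ordinary least-squares fit for the paper's Bayesian hierarchical model and uses randomly generated stand-in data, so it illustrates the idea rather than reproducing the analysis.

```python
import numpy as np

def find_bright_and_dark_spots(log_biomass, drivers, threshold=2.0):
    """Flag sites whose observed (log) fish biomass deviates from model expectations
    by more than `threshold` standard deviations. A plain least-squares fit stands in
    for a Bayesian hierarchical model (illustration only)."""
    X = np.column_stack([np.ones(len(log_biomass)), drivers])   # intercept + covariates
    beta, *_ = np.linalg.lstsq(X, log_biomass, rcond=None)
    residuals = log_biomass - X @ beta
    z = residuals / residuals.std(ddof=X.shape[1])
    return np.where(z > threshold)[0], np.where(z < -threshold)[0]   # bright, dark

# Stand-in data: 2,500 sites, 18 socioeconomic/environmental covariates
rng = np.random.default_rng(0)
drivers = rng.normal(size=(2500, 18))
log_biomass = drivers @ rng.normal(size=18) + rng.normal(scale=1.0, size=2500)
bright, dark = find_bright_and_dark_spots(log_biomass, drivers)
```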

Journal ArticleDOI
Georges Aad1, Brad Abbott2, Jalal Abdallah3, Ovsat Abdinov4  +2812 moreInstitutions (207)
TL;DR: In this paper, an independent b-tagging algorithm based on the reconstruction of muons inside jets and the b-tagging algorithm used in the online trigger are also presented.
Abstract: The identification of jets containing b hadrons is important for the physics programme of the ATLAS experiment at the Large Hadron Collider. Several algorithms to identify jets containing b hadrons are described, ranging from those based on the reconstruction of an inclusive secondary vertex or the presence of tracks with large impact parameters to combined tagging algorithms making use of multi-variate discriminants. An independent b-tagging algorithm based on the reconstruction of muons inside jets as well as the b-tagging algorithm used in the online trigger are also presented. The b-jet tagging efficiency, the c-jet tagging efficiency and the mistag rate for light flavour jets in data have been measured with a number of complementary methods. The calibration results are presented as scale factors defined as the ratio of the efficiency (or mistag rate) in data to that in simulation. In the case of b jets, where more than one calibration method exists, the results from the various analyses have been combined taking into account the statistical correlation as well as the correlation of the sources of systematic uncertainty.

Journal ArticleDOI
TL;DR: A pivotal conclusion is reached that by carefully designing target data rates and power allocation coefficients of users, NOMA can outperform conventional orthogonal multiple access in underlay CR networks.
Abstract: In this paper, nonorthogonal multiple access (NOMA) is applied to large-scale underlay cognitive radio (CR) networks with randomly deployed users. To characterize the performance of the considered network, new closed-form expressions of the outage probability are derived using stochastic geometry. More importantly, by carrying out the diversity analysis, new insights are obtained under the two scenarios with different power constraints: 1) fixed transmit power of the primary transmitters (PTs); and 2) transmit power of the PTs being proportional to that of the secondary base station. For the first scenario, a diversity order of m is experienced at the mth-ordered NOMA user. For the second scenario, there is an asymptotic error floor for the outage probability. Simulation results are provided to verify the accuracy of the derived results. A pivotal conclusion is reached that by carefully designing target data rates and power allocation coefficients of users, NOMA can outperform conventional orthogonal multiple access in underlay CR networks.

Journal ArticleDOI
TL;DR: The results demonstrate that NOMA can achieve superior performance compared to the traditional orthogonal multiple access (OMA) and the derived expressions for the outage probability and the average sum rate match well with the Monte Carlo simulations.
Abstract: In this paper, a downlink single-cell non-orthogonal multiple access (NOMA) network with uniformly deployed users is considered and an analytical framework to evaluate its performance is developed. Particularly, the performance of NOMA is studied by assuming two types of partial channel state information (CSI). For the first one, which is based on imperfect CSI, we present a simple closed-form approximation for the outage probability and the average sum rate, as well as their high signal-to-noise ratio (SNR) expressions. For the second type of CSI, which is based on second order statistics (SOS), we derive a closed-form expression for the outage probability and an approximate expression for the average sum rate for the special case of two users. For the addressed scenario with the two types of partial CSI, the results demonstrate that NOMA can achieve superior performance compared to the traditional orthogonal multiple access (OMA). Moreover, SOS-based NOMA always achieves better performance than that with imperfect CSI, while it can achieve similar performance to the NOMA with perfect CSI at the low SNR region. The provided numerical results confirm that the derived expressions for the outage probability and the average sum rate match well with the Monte Carlo simulations.

Journal ArticleDOI
TL;DR: It is suggested that social theories of practice provide an alternative paradigm to psychological understandings and individualistic theories of human behaviour and behaviour change, informing significantly new ways of conceptualising and responding to some of the most pressing contemporary challenges in public health.
Abstract: Psychological understandings and individualistic theories of human behaviour and behaviour change have dominated both academic research and interventions at the ‘coalface’ of public health. Meanwhile, efforts to understand persistent inequalities in health point to structural factors, but fail to show exactly how these translate into the daily lives (and hence health) of different sectors of the population. In this paper, we suggest that social theories of practice provide an alternative paradigm to both approaches, informing significantly new ways of conceptualising and responding to some of the most pressing contemporary challenges in public health. We introduce and discuss the relevance of such an approach with reference to tobacco smoking, focusing on the life course of smoking as a practice, rather than on the characteristics of individual smokers or on broad social determinants of health. This move forces us to consider the material and symbolic elements of which smoking is comprised, and to follow ...

Journal ArticleDOI
01 Sep 2016-Nature
TL;DR: In this paper, the authors show that plant species diversity decreased when a greater number of limiting nutrients were added across 45 grassland sites from a multi-continent experimental network, even after controlling for effects of plant biomass, and even where biomass production was not nutrient-limited.
Abstract: Niche dimensionality provides a general theoretical explanation for biodiversity-more niches, defined by more limiting factors, allow for more ways that species can coexist. Because plant species compete for the same set of limiting resources, theory predicts that addition of a limiting resource eliminates potential trade-offs, reducing the number of species that can coexist. Multiple nutrient limitation of plant production is common and therefore fertilization may reduce diversity by reducing the number or dimensionality of belowground limiting factors. At the same time, nutrient addition, by increasing biomass, should ultimately shift competition from belowground nutrients towards a one-dimensional competitive trade-off for light. Here we show that plant species diversity decreased when a greater number of limiting nutrients were added across 45 grassland sites from a multi-continent experimental network. The number of added nutrients predicted diversity loss, even after controlling for effects of plant biomass, and even where biomass production was not nutrient-limited. We found that elevated resource supply reduced niche dimensionality and diversity and increased both productivity and compositional turnover. Our results point to the importance of understanding dimensionality in ecological systems that are undergoing diversity loss in response to multiple global change factors.

Journal ArticleDOI
Georges Aad1, Brad Abbott2, Jalal Abdallah3, Ovsat Abdinov4  +2862 moreInstitutions (191)
TL;DR: The methods employed in the ATLAS experiment to correct for the impact of pile-up on jet energy and jet shapes, and for the presence of spurious additional jets, are described, with a primary focus on the large 20.3 fb−1 data sample.
Abstract: The large rate of multiple simultaneous proton-proton interactions, or pile-up, generated by the Large Hadron Collider in Run 1 required the development of many new techniques to mitigate the adverse ...

Journal ArticleDOI
28 Jul 2016-BMJ
TL;DR: Women with CIN have a higher baseline risk for prematurity, and excisional and ablative treatment further increases that risk; the frequency and severity of adverse sequelae increase with increasing cone depth and are higher for excision than for ablation.
Abstract: Objective To assess the effect of treatment for cervical intraepithelial neoplasia (CIN) on obstetric outcomes and to correlate this with cone depth and comparison group used. Design Systematic review and meta-analysis. Data sources CENTRAL, Medline, Embase from 1948 to April 2016 were searched for studies assessing obstetric outcomes in women with or without previous local cervical treatment. Data extraction and synthesis Independent reviewers extracted the data and performed quality assessment using the Newcastle-Ottawa criteria. Studies were classified according to method and obstetric endpoint. Pooled risk ratios were calculated with a random effect model and inverse variance. Heterogeneity between studies was assessed with I2 statistics. Main outcome measures Obstetric outcomes comprised preterm birth (including spontaneous and threatened), premature rupture of the membranes, chorioamnionitis, mode of delivery, length of labour, induction of delivery, oxytocin use, haemorrhage, analgesia, cervical cerclage, and cervical stenosis. Neonatal outcomes comprised low birth weight, admission to neonatal intensive care, stillbirth, APGAR scores, and perinatal mortality. Results 71 studies were included (6 338 982 participants: 65 082 treated/6 292 563 untreated). Treatment significantly increased the risk of overall ( Conclusions Women with CIN have a higher baseline risk for prematurity. Excisional and ablative treatment further increases that risk. The frequency and severity of adverse sequelae increases with increasing cone depth and is higher for excision than for ablation.
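The pooling method named in the abstract (inverse-variance random-effects, with I² for heterogeneity) can be sketched in a few lines. The DerSimonian-Laird estimator below is the standard textbook version, and the example study values are made up for illustration, not taken from the review.

```python
import numpy as np

def random_effects_pool(log_rr, var_log_rr):
    """DerSimonian-Laird random-effects pooling of study log risk ratios, with I^2."""
    y, v = np.asarray(log_rr, float), np.asarray(var_log_rr, float)
    w = 1.0 / v                                    # inverse-variance (fixed-effect) weights
    y_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fe) ** 2)                # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                        # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0
    rr, lo, hi = np.exp([pooled, pooled - 1.96 * se, pooled + 1.96 * se])
    return rr, (lo, hi), i2

# Hypothetical example: three studies' log risk ratios and their variances
print(random_effects_pool(log_rr=[0.47, 0.69, 0.33], var_log_rr=[0.04, 0.09, 0.02]))
```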

01 Apr 2016
TL;DR: In this article, a Monte Carlo approach is proposed to improve the accuracy of SfM-based DEMs and minimise the associated field effort by robust determination of suitable lower-density deployments of ground control.
Abstract: Structure-from-motion (SfM) algorithms greatly facilitate the production of detailed topographic models from photographs collected using unmanned aerial vehicles (UAVs). However, the survey quality achieved in published geomorphological studies is highly variable, and sufficient processing details are never provided to understand fully the causes of variability. To address this, we show how survey quality and consistency can be improved through a deeper consideration of the underlying photogrammetric methods. We demonstrate the sensitivity of digital elevation models (DEMs) to processing settings that have not been discussed in the geomorphological literature, yet are a critical part of survey georeferencing, and are responsible for balancing the contributions of tie and control points. We provide a Monte Carlo approach to enable geomorphologists to (1) carefully consider sources of survey error and hence increase the accuracy of SfM-based DEMs and (2) minimise the associated field effort by robust determination of suitable lower-density deployments of ground control. By identifying appropriate processing settings and highlighting photogrammetric issues such as over-parameterisation during camera self-calibration, processing artefacts are reduced and the spatial variability of error minimised. We demonstrate such DEM improvements with a commonly-used SfM-based software (PhotoScan), which we augment with semi-automated and automated identification of ground control points (GCPs) in images, and apply to two contrasting case studies — an erosion gully survey (Taroudant, Morocco) and an active landslide survey (Super-Sauze, France). In the gully survey, refined processing settings eliminated step-like artefacts of up to ~ 50 mm in amplitude, and overall DEM variability with GCP selection improved from 37 to 16 mm. In the much more challenging landslide case study, our processing halved planimetric error to ~ 0.1 m, effectively doubling the frequency at which changes in landslide velocity could be detected. In both case studies, the Monte Carlo approach provided a robust demonstration that field effort could be substantially reduced by only deploying approximately half the number of GCPs, with minimal effect on the survey quality. To reduce processing artefacts and promote confidence in SfM-based geomorphological surveys, published results should include processing details which include the image residuals for both tie points and GCPs, and ensure that these are considered appropriately within the workflow.
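The Monte Carlo idea of testing lower-density ground-control deployments can be expressed generically as repeated random subsampling of GCPs. In the sketch below, `process_survey` and `run_photoscan_job` are hypothetical stand-ins for whatever photogrammetric pipeline reprocesses the survey and reports check-point error; it is a schematic of the approach, not the authors' published workflow.

```python
import random
import statistics

def monte_carlo_gcp_test(all_gcps, process_survey, n_keep, n_runs=50, seed=42):
    """Monte Carlo assessment of ground-control density (illustrative sketch).
    `process_survey(control, check)` is assumed to georeference the model with
    `control` and return an RMS error measured on the withheld `check` points."""
    rng = random.Random(seed)
    errors = []
    for _ in range(n_runs):
        control = rng.sample(all_gcps, n_keep)              # random GCP subset used as control
        check = [g for g in all_gcps if g not in control]   # remaining GCPs used as checks
        errors.append(process_survey(control, check))
    return statistics.mean(errors), statistics.stdev(errors)

# Sweep candidate densities to find the smallest deployment with acceptable, stable error:
# for n_keep in (4, 6, 8, 12):
#     print(n_keep, monte_carlo_gcp_test(gcp_list, run_photoscan_job, n_keep))
```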

Proceedings ArticleDOI
15 Feb 2016
TL;DR: A performance and capability analysis of a currently available LoRa transceiver is presented and it is demonstrated how unique features such as concurrent non-destructive transmissions and carrier detection can be employed in a wide-area application scenario.
Abstract: New transceiver technologies have emerged which enable power efficient communication over very long distances. Examples of such Low-Power Wide-Area Network (LPWAN) technologies are LoRa, Sigfox and Weightless. A typical application scenario for these technologies is city wide meter reading collection where devices send readings at very low frequency over a long distance to a data concentrator (one-hop networks). We argue that these transceivers are potentially very useful to construct more generic Internet of Things (IoT) networks incorporating multi-hop bidirectional communication enabling sensing and actuation. Furthermore, these transceivers have interesting features not available with more traditional transceivers used for IoT networks which enable construction of novel protocol elements. In this paper we present a performance and capability analysis of a currently available LoRa transceiver. We describe its features and then demonstrate how such a transceiver can be put to use efficiently in a wide-area application scenario. In particular we demonstrate how unique features such as concurrent non-destructive transmissions and carrier detection can be employed. Our deployment experiment demonstrates that 6 LoRa nodes can form a network covering 1.5 ha in a built up environment, achieving a potential lifetime of 2 years on 2 AA batteries and delivering data within 5 s with a reliability of 80%.

Proceedings ArticleDOI
24 Oct 2016
TL;DR: The authors propose TarGuess, a framework that systematically characterizes typical targeted guessing scenarios with seven sound mathematical models, each based on the kinds of data available to an attacker, and use these models to design novel and efficient guessing algorithms.
Abstract: While trawling online/offline password guessing has been intensively studied, only a few studies have examined targeted online guessing, where an attacker guesses a specific victim's password for a service, by exploiting the victim's personal information such as one sister password leaked from her another account and some personally identifiable information (PII). A key challenge for targeted online guessing is to choose the most effective password candidates, while the number of guess attempts allowed by a server's lockout or throttling mechanisms is typically very small. We propose TarGuess, a framework that systematically characterizes typical targeted guessing scenarios with seven sound mathematical models, each of which is based on varied kinds of data available to an attacker. These models allow us to design novel and efficient guessing algorithms. Extensive experiments on 10 large real-world password datasets show the effectiveness of TarGuess. Particularly, TarGuess I~IV capture the four most representative scenarios and within 100 guesses: (1) TarGuess-I outperforms its foremost counterpart by 142% against security-savvy users and by 46% against normal users; (2) TarGuess-II outperforms its foremost counterpart by 169% on security-savvy users and by 72% against normal users; and (3) Both TarGuess-III and IV gain success rates over 73% against normal users and over 32% against security-savvy users. TarGuess-III and IV, for the first time, address the issue of cross-site online guessing when given the victim's one sister password and some PII.