Showing papers by "University of Coimbra" published in 2016
••
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes.
For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy.
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.
5,187 citations
••
James Bentham1, Mariachiara Di Cesare1,2, Gretchen A Stevens3 +787 more • Institutions (246)
TL;DR: The height differential between the tallest and shortest populations was 19-20 cm a century ago, and has remained the same for women and increased for men a century later despite substantial changes in the ranking of countries.
Abstract: Being taller is associated with enhanced longevity, higher education, and higher earnings. We reanalysed 1472 population-based studies, with measurement of height on more than 18.6 million participants, to estimate mean height for people born between 1896 and 1996 in 200 countries. The largest gain in adult height over the past century has occurred in South Korean women and Iranian men, who became 20.2 cm (95% credible interval 17.5–22.7) and 16.5 cm (13.3–19.7) taller, respectively. In contrast, there was little change in adult height in some sub-Saharan African countries and in South Asia over the century of analysis. The tallest people over these 100 years are men born in the Netherlands in the last quarter of the 20th century, whose average heights surpassed 182.5 cm, and the shortest were women born in Guatemala in 1896 (140.3 cm; 135.8–144.8). The height differential between the tallest and shortest populations was 19–20 cm a century ago, and a century later has remained the same for women and increased for men, despite substantial changes in the ranking of countries.
1,348 citations
••
TL;DR: The use of nanoparticle (NP) formulations able to encapsulate molecules with therapeutic value, while targeting specific transport processes in the brain vasculature, may enhance drug transport through the BBB in neurodegenerative/ischemic disorders and target relevant regions in the brain for regenerative processes.
955 citations
••
Max Planck Society1, University of Tübingen2, Howard Hughes Medical Institute3, Harvard University4, Broad Institute5, University College Dublin6, University of Coimbra7, University of Adelaide8, Altai State University9, Russian Academy of Sciences10, University of Pisa11, University of Bari12, University of Cantabria13, University of New Mexico14, Austrian Academy of Sciences15, Naturhistorisches Museum16, University of Vienna17, University of Ferrara18, University of Florence19, University of Siena20, Centre national de la recherche scientifique21, University of Bucharest22, California State University, Northridge23, University of Bordeaux24, University of Toulouse25, Royal Belgian Institute of Natural Sciences26, Masaryk University27, Academy of Sciences of the Czech Republic28
TL;DR: In this article, the authors analyse genome-wide data from 51 Eurasians from ~45,000-7,000 years ago and find that the proportion of Neanderthal DNA decreased from 3-6% to around 2%, consistent with natural selection against Neanderthal variants in modern humans.
Abstract: Modern humans arrived in Europe ~45,000 years ago, but little is known about their genetic composition before the start of farming ~8,500 years ago. Here we analyse genome-wide data from 51 Eurasians from ~45,000-7,000 years ago. Over this time, the proportion of Neanderthal DNA decreased from 3-6% to around 2%, consistent with natural selection against Neanderthal variants in modern humans. Whereas there is no evidence of the earliest modern humans in Europe contributing to the genetic composition of present-day Europeans, all individuals between ~37,000 and ~14,000 years ago descended from a single founder population which forms part of the ancestry of present-day Europeans. An ~35,000-year-old individual from northwest Europe represents an early branch of this founder population which was then displaced across a broad region, before reappearing in southwest Europe at the height of the last Ice Age ~19,000 years ago. During the major warming period after ~14,000 years ago, a genetic component related to present-day Near Easterners became widespread in Europe. These results document how population turnover and migration have been recurring themes of European prehistory.
702 citations
••
Broad Institute1, Whitman College2, Simon Fraser University3, Howard Hughes Medical Institute4, University of Coimbra5, University College Dublin6, Emory University7, Chinese Academy of Sciences8, University of Ferrara9, University of Miskolc10, Armenian National Academy of Sciences11, University of Pennsylvania12, University of Winnipeg13, Alexandru Ioan Cuza University14, University of Edinburgh15, Royal College of Surgeons in Ireland16, Spanish National Research Council17, Imperial College London18, Max Planck Society19, Binghamton University20, University of Huddersfield21, University of Pavia22, Yerevan State University23
TL;DR: This paper reported genome-wide ancient DNA from 44 ancient Near Easterners ranging in time between ~12,000 and 1,400 bc, from Natufian hunter-gatherers to Bronze Age farmers, showing that the earliest populations of the Near East derived around half their ancestry from a 'Basal Eurasian' lineage that had little if any Neanderthal admixture and that separated from other non-African lineages before their separation from each other.
Abstract: We report genome-wide ancient DNA from 44 ancient Near Easterners ranging in time between ~12,000 and 1,400 bc, from Natufian hunter–gatherers to Bronze Age farmers. We show that the earliest populations of the Near East derived around half their ancestry from a ‘Basal Eurasian’ lineage that had little if any Neanderthal admixture and that separated from other non-African lineages before their separation from each other. The first farmers of the southern Levant (Israel and Jordan) and Zagros Mountains (Iran) were strongly genetically differentiated, and each descended from local hunter–gatherers. By the time of the Bronze Age, these two populations and Anatolian-related farmers had mixed with each other and with the hunter–gatherers of Europe to greatly reduce genetic differentiation. The impact of the Near Eastern farmers extended beyond the Near East: farmers related to those of Anatolia spread westward into Europe; farmers related to those of the Levant spread southward into East Africa; farmers related to those of Iran spread northward into the Eurasian steppe; and people related to both the early farmers of Iran and to the pastoralists of the Eurasian steppe spread eastward into South Asia.
695 citations
••
French Alternative Energies and Atomic Energy Commission1, Université catholique de Louvain2, Confederation College3, University of Liège4, University of Grenoble5, Université de Montréal6, Universidad de Bogotá Jorge Tadeo Lozano7, Rutgers University8, Chinese Academy of Sciences9, Central University of Finance and Economics10, Josip Juraj Strossmayer University of Osijek11, University of Coimbra12, University of the Basque Country13, University of West Virginia14, McMaster University15, Dalhousie University16
TL;DR: The present paper aims to describe the new capabilities of ABINIT that have been developed since 2009, including new physical and technical features that enable electronic structure calculations that were impossible to carry out with previous versions.
639 citations
••
Aix-Marseille University1, University of Oklahoma2, University of Iowa3, Azerbaijan National Academy of Sciences4, Université Paris-Saclay5, University of Amsterdam6, University of California, Santa Cruz7, University of Sussex8, Tel Aviv University9, Technion – Israel Institute of Technology10, University of Oregon11, Stockholm University12, King's College London13, International Centre for Theoretical Physics14, AGH University of Science and Technology15, Brookhaven National Laboratory16, Northern Illinois University17, Ludwig Maximilian University of Munich18, Rutherford Appleton Laboratory19, University of Liverpool20, University of Belgrade21, University of Göttingen22, University of Granada23, Boston University24, Joint Institute for Nuclear Research25, University of Rome Tor Vergata26, Lund University27, University of Bologna28, University of Victoria29, University of Grenoble30, National University of La Plata31, CERN32, National Technical University of Athens33, University of Salento34, University of Chicago35, Columbia University36, University of Birmingham37, University of Naples Federico II38, University of Copenhagen39, University of Washington40, University of Valencia41, Lawrence Berkeley National Laboratory42, Federal University of Rio de Janeiro43, Brandeis University44, University of Michigan45, University of Coimbra46, University of Lisbon47, University of Sheffield48, University of Geneva49, University of Texas at Austin50, Heidelberg University51, University of Milan52, National and Kapodistrian University of Athens53, Dresden University of Technology54, Novosibirsk State University55, IFAE56
TL;DR: In this article, combined ATLAS and CMS measurements of the Higgs boson production and decay rates, as well as constraints on its couplings to vector bosons and fermions, are presented.
Abstract: Combined ATLAS and CMS measurements of the Higgs boson production and decay rates, as well as constraints on its couplings to vector bosons and fermions, are presented. The combination is based on the analysis of five production processes, namely gluon fusion, vector boson fusion, and associated production with a $W$ or a $Z$ boson or a pair of top quarks, and of the six decay modes $H \to ZZ, WW$, $\gamma\gamma, \tau\tau, bb$, and $\mu\mu$. All results are reported assuming a value of 125.09 GeV for the Higgs boson mass, the result of the combined measurement by the ATLAS and CMS experiments. The analysis uses the CERN LHC proton--proton collision data recorded by the ATLAS and CMS experiments in 2011 and 2012, corresponding to integrated luminosities per experiment of approximately 5 fb$^{-1}$ at $\sqrt{s}=7$ TeV and 20 fb$^{-1}$ at $\sqrt{s} = 8$ TeV. The Higgs boson production and decay rates measured by the two experiments are combined within the context of three generic parameterisations: two based on cross sections and branching fractions, and one on ratios of coupling modifiers. Several interpretations of the measurements with more model-dependent parameterisations are also given. The combined signal yield relative to the Standard Model prediction is measured to be 1.09 $\pm$ 0.11. The combined measurements lead to observed significances for the vector boson fusion production process and for the $H \to \tau\tau$ decay of $5.4$ and $5.5$ standard deviations, respectively. The data are consistent with the Standard Model predictions for all parameterisations considered.
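For orientation, the "signal yield relative to the Standard Model prediction" quoted above is the usual signal-strength parameter; a minimal sketch of its standard definition (the symbols are textbook notation, not quoted from the paper):
$$\mu_i^f = \frac{\sigma_i \cdot \mathrm{BR}^f}{(\sigma_i)_{\mathrm{SM}} \cdot (\mathrm{BR}^f)_{\mathrm{SM}}},$$
so the combined result $\mu = 1.09 \pm 0.11$ expresses the measured production ($i$) and decay ($f$) channels relative to their Standard Model expectations, with $\mu = 1$ corresponding to exact agreement.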
618 citations
••
Columbia University1, University of Amsterdam2, University of Bologna3, University of Mainz4, University of Coimbra5, Weizmann Institute of Science6, New York University Abu Dhabi7, University of Zurich8, Stockholm University9, Rensselaer Polytechnic Institute10, Max Planck Society11, University of Münster12, University of Bern13, Purdue University14, École des mines de Nantes15, University of California, Los Angeles16, Rice University17
TL;DR: In this article, the expected sensitivity of the XENON1T experiment to the spin-independent WIMP-nucleon interaction cross section was investigated, based on Monte Carlo predictions of the electronic and nuclear recoil backgrounds.
Abstract: The XENON1T experiment is currently in the commissioning phase at the Laboratori Nazionali del Gran Sasso, Italy. In this article we study the experiment's expected sensitivity to the spin-independent WIMP-nucleon interaction cross section, based on Monte Carlo predictions of the electronic and nuclear recoil backgrounds. The total electronic recoil background in 1 tonne fiducial volume and (1, 12) keV electronic recoil equivalent energy region, before applying any selection to discriminate between electronic and nuclear recoils, is $(1.80 \pm 0.15) \times 10^{-4}$ (kg·day·keV)$^{-1}$, mainly due to the decay of $^{222}$Rn daughters inside the xenon target. The nuclear recoil background in the corresponding nuclear recoil equivalent energy region (4, 50) keV is composed of $(0.6 \pm 0.1)$ (t·y)$^{-1}$ from radiogenic neutrons, $(1.8 \pm 0.3) \times 10^{-2}$ (t·y)$^{-1}$ from coherent scattering of neutrinos, and less than 0.01 (t·y)$^{-1}$ from muon-induced neutrons. The sensitivity of XENON1T is calculated with the Profile Likelihood Ratio method, after converting the deposited energy of electronic and nuclear recoils into the scintillation and ionization signals seen in the detector. We take into account the systematic uncertainties on the photon and electron emission model, and on the estimation of the backgrounds, treated as nuisance parameters. The main contribution comes from the relative scintillation efficiency $\mathcal{L}_{\mathrm{eff}}$, which affects both the signal from WIMPs and the nuclear recoil backgrounds. After a 2 y measurement in 1 t fiducial volume, the sensitivity reaches a minimum cross section of $1.6 \times 10^{-47}$ cm$^2$ at $m_\chi = 50$ GeV/$c^2$.
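For context, the Profile Likelihood Ratio method referred to above is conventionally built on the test statistic below (a textbook definition, stated here as an assumption about the standard approach rather than a formula from the paper), with the WIMP-nucleon cross section $\sigma$ as the parameter of interest and the emission-model and background uncertainties entering as nuisance parameters $\theta$:
$$\lambda(\sigma) = \frac{\mathcal{L}(\sigma,\, \hat{\hat{\theta}}(\sigma))}{\mathcal{L}(\hat{\sigma},\, \hat{\theta})},$$
where $\hat{\hat{\theta}}(\sigma)$ maximizes the likelihood at fixed $\sigma$ and $(\hat{\sigma}, \hat{\theta})$ is the global maximum.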
580 citations
••
University of Amsterdam1, University of Bologna2, University of Mainz3, University of Coimbra4, University of Bern5, Columbia University6, Weizmann Institute of Science7, New York University Abu Dhabi8, University of Zurich9, Rensselaer Polytechnic Institute10, Max Planck Society11, Stockholm University12, University of Nantes13, Karlsruhe Institute of Technology14, University of Münster15, University of Chicago16, Arizona State University17, Purdue University18, Rice University19, University of California, San Diego20, University of Freiburg21, Dresden University of Technology22, Imperial College London23, University of California, Los Angeles24
TL;DR: DARk matter WImp search with liquid xenoN (DARWIN) as mentioned in this paper is an experiment for the direct detection of dark matter using a multi-ton liquid xenon time projection chamber at its core.
Abstract: DARk matter WImp search with liquid xenoN (DARWIN) will be an experiment for the direct detection of dark matter using a multi-ton liquid xenon time projection chamber at its core. Its primary g ...
553 citations
••
TL;DR: This report provides national estimates of levels and trends of HIV/AIDS incidence, prevalence, coverage of antiretroviral therapy (ART), and mortality for 195 countries and territories from 1980 to 2015.
522 citations
••
TL;DR: The CRESST-II experiment uses cryogenic detectors to search for nuclear recoil events induced by the elastic scattering of dark matter particles in CaWO$_4$ crystals.
Abstract: The CRESST-II experiment uses cryogenic detectors to search for nuclear recoil events induced by the elastic scattering of dark matter particles in CaWO$_4$ crystals. Given the low energy threshold of our detectors in combination with light target nuclei, low mass dark matter particles can be probed with high sensitivity. In this letter we present the results from data of a single detector module corresponding to 52 kg live days. A blind analysis is carried out. With an energy threshold for nuclear recoils of 307 eV we substantially enhance the sensitivity for light dark matter. Thereby, we extend the reach of direct dark matter experiments to the sub-GeV/$c^2$ region and demonstrate that the energy threshold is the key parameter in the search for low mass dark matter particles.
••
Case Western Reserve University1, Imperial College London2, South Dakota School of Mines and Technology3, University of Maryland, College Park4, University of Edinburgh5, Yale University6, Lawrence Livermore National Laboratory7, University of California, Santa Barbara8, Brown University9, University of South Dakota10, University of California, Davis11, University of Coimbra12, Lawrence Berkeley National Laboratory13, University College London14, University of Rochester15, University of California, Berkeley16, SLAC National Accelerator Laboratory17, Texas A&M University18, State University of New York System19
TL;DR: This new analysis incorporates several advances: single-photon calibration at the scintillation wavelength, improved event-reconstruction algorithms, a revised background model including events originating on the detector walls in an enlarged fiducial volume, and new calibrations from decays of an injected tritium β source and from kinematically constrained nuclear recoils down to 1.1 keV.
Abstract: We present constraints on weakly interacting massive particle (WIMP)-nucleus scattering from the 2013 data of the Large Underground Xenon dark matter experiment, including 1.4×10^{4} kg day of search exposure. This new analysis incorporates several advances: single-photon calibration at the scintillation wavelength, improved event-reconstruction algorithms, a revised background model including events originating on the detector walls in an enlarged fiducial volume, and new calibrations from decays of an injected tritium β source and from kinematically constrained nuclear recoils down to 1.1 keV. Sensitivity, especially to low-mass WIMPs, is enhanced compared to our previous results, which modeled the signal only above a 3 keV minimum energy. Under standard dark matter halo assumptions and in the mass range above 4 GeV c^{-2}, these new results give the most stringent direct limits on the spin-independent WIMP-nucleon cross section. The 90% C.L. upper limit has a minimum of 0.6 zb at 33 GeV c^{-2} WIMP mass.
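As a quick unit conversion for the quoted limit (simple arithmetic, not an additional result): since $1~\mathrm{zb} = 10^{-21}~\mathrm{b} = 10^{-45}~\mathrm{cm}^2$,
$$0.6~\mathrm{zb} = 6 \times 10^{-46}~\mathrm{cm}^2 \quad \text{at } 33~\mathrm{GeV}\,c^{-2}.$$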
••
TL;DR: The present work aims to review the progress of recent research on the isolation, identification and diversity of metal resistant endophytic bacteria and illustrate various mechanisms responsible for plant growth promotion and heavy metal detoxification/phytoaccumulation/translocation in plants.
••
TL;DR: Aerogels are an exceptional class of porous material with a number of excellent physicochemical properties such as low density, high porosity, high surface area and adjustable surface chemistry as discussed by the authors.
••
TL;DR: In this article, the performance of the ATLAS muon identification and reconstruction is evaluated using the first LHC dataset recorded at √s = 13 TeV in 2015 and compared to Monte Carlo simulations.
Abstract: This article documents the performance of the ATLAS muon identification and reconstruction using the first LHC dataset recorded at √s = 13 TeV in 2015. Using a large sample of J/ψ→μμ and Z→μμ decays from 3.2 fb−1 of pp collision data, measurements of the reconstruction efficiency, as well as of the momentum scale and resolution, are presented and compared to Monte Carlo simulations. The reconstruction efficiency is measured to be close to 99% over most of the covered phase space (|η| < 2.7). For muons with |η| > 2.2, the pT resolution for muons from Z→μμ decays is 2.9%, while the precision of the momentum scale for low-pT muons from J/ψ→μμ decays is about 0.2%.
••
TL;DR: This Review gives emphasis to the nonlinear optical properties of photoactive materials for the function of optical power limiting and describes the known mechanisms of optical limiting for the different types of materials.
Abstract: The control of luminous radiation has extremely important implications for modern and future technologies as well as in medicine. In this Review, we detail chemical structures and their relevant photophysical features for various groups of materials, including organic dyes such as metalloporphyrins and metallophthalocyanines (and derivatives), other common organic materials, mixed metal complexes and clusters, fullerenes, dendrimeric nanocomposites, polymeric materials (organic and/or inorganic), inorganic semiconductors, and other nanoscopic materials, utilized or potentially useful for the realization of devices able to filter an external radiation in a smart way. "Smart" here refers to materials that can filter the radiation dynamically, without the need for an ancillary system to activate the required transmission change. In particular, this Review gives emphasis to the nonlinear optical properties of photoactive materials for the function of optical power limiting.
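As a minimal quantitative sketch of the limiting mechanisms such a review covers (a standard relation assumed here, not quoted from the text), nonlinear absorbers show an intensity-dependent absorption coefficient,
$$\alpha(I) = \alpha_0 + \beta I,$$
where $\alpha_0$ is the linear absorption coefficient and $\beta$ the nonlinear (e.g., two-photon) absorption coefficient; because $\alpha$ grows with incident intensity $I$, transmittance drops precisely when the radiation becomes dangerous, which is the essence of passive optical power limiting.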
••
University of Coimbra1, University of Aberdeen2, Netherlands Cancer Institute3, University of Rennes4, University of Texas at Austin5, Charles University in Prague6, Hannover Medical School7, Radboud University Nijmegen8, St Bartholomew's Hospital9, Ludwig Maximilian University of Munich10, Umeå University11, University of Eastern Piedmont12
TL;DR: The results suggest that RTB has good accuracy in diagnosing renal cancer and its subtypes, and it appears to be safe, but better quality studies are required to provide a more definitive answer.
••
TL;DR: In this paper, an independent b-tagging algorithm based on the reconstruction of muons inside jets and the b-tagging algorithm used in the online trigger are also presented.
Abstract: The identification of jets containing b hadrons is important for the physics programme of the ATLAS experiment at the Large Hadron Collider. Several algorithms to identify jets containing b hadrons are described, ranging from those based on the reconstruction of an inclusive secondary vertex or the presence of tracks with large impact parameters to combined tagging algorithms making use of multi-variate discriminants. An independent b-tagging algorithm based on the reconstruction of muons inside jets as well as the b-tagging algorithm used in the online trigger are also presented. The b-jet tagging efficiency, the c-jet tagging efficiency and the mistag rate for light flavour jets in data have been measured with a number of complementary methods. The calibration results are presented as scale factors defined as the ratio of the efficiency (or mistag rate) in data to that in simulation. In the case of b jets, where more than one calibration method exists, the results from the various analyses have been combined taking into account the statistical correlation as well as the correlation of the sources of systematic uncertainty.
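In symbols, the calibration scale factors described above are simply (notation introduced here for readability, following the abstract's definition):
$$\mathrm{SF}_b = \frac{\varepsilon_b^{\mathrm{data}}}{\varepsilon_b^{\mathrm{sim}}},$$
with analogous ratios for the c-jet tagging efficiency and the light-flavour mistag rate; such factors are typically applied as per-jet weights to correct the simulation in physics analyses.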
••
TL;DR: The adenosine modulation system mostly operates through inhibitory A1 receptors (A1R) and facilitatory A2A receptors (A2AR) in the brain; simultaneously bolstering A1R preconditioning and preventing excessive A2AR function might afford maximal neuroprotection.
Abstract: The adenosine modulation system mostly operates through inhibitory A1 (A1R) and facilitatory A2A receptors (A2AR) in the brain. The activity-dependent release of adenosine acts as a brake on excitatory transmission through A1R, which are enriched in glutamatergic terminals. Adenosine sharpens the salience of information encoding in neuronal circuits: high-frequency stimulation triggers ATP release in the 'activated' synapse, which is locally converted by ecto-nucleotidases into adenosine to selectively activate A2AR; A2AR switch off A1R and CB1 receptors, bolster glutamate release and NMDA receptors to assist increasing synaptic plasticity in the 'activated' synapse; the parallel engagement of the astrocytic syncytium releases adenosine, further inhibiting neighboring synapses and thus sharpening the encoded plastic change. Brain insults trigger a large outflow of adenosine and ATP as a danger signal. A1R are a hurdle for damage initiation, but they desensitize upon prolonged activation. However, if the insult is near-threshold and/or of short duration, A1R trigger preconditioning, which may limit the spread of damage. Brain insults also up-regulate A2AR, probably to bolster adaptive changes, but this heightens brain damage, since A2AR blockade affords neuroprotection in models of epilepsy, depression, Alzheimer's, or Parkinson's disease. This initially involves a control of synaptotoxicity by neuronal A2AR, whereas astrocytic and microglial A2AR might control the spread of damage. The A2AR signaling mechanisms are largely unknown, since A2AR are pleiotropic, coupling to different G proteins and non-canonical pathways to control the viability of glutamatergic synapses, neuroinflammation, mitochondrial function, and cytoskeleton dynamics. Thus, simultaneously bolstering A1R preconditioning and preventing excessive A2AR function might afford maximal neuroprotection. The main physiological role of the adenosine modulation system is to sharpen the salience of information encoding through a combined action of adenosine A2A receptors (A2AR) in the synapse undergoing an alteration of synaptic efficiency, with an increased inhibitory action of A1R in all surrounding synapses. Brain insults trigger an up-regulation of A2AR in an attempt to bolster adaptive plasticity, together with adenosine release and A1R desensitization; this favors synaptotoxicity (increased A2AR) and lowers the hurdle to degeneration (decreased A1R). Maximal neuroprotection is expected to result from a combined A2AR blockade and increased A1R activation. This article is part of a mini-review series: "Synaptic Function and Dysfunction in Brain Diseases".
••
TL;DR: The methods employed in the ATLAS experiment to correct for the impact of pile-up on jet energy and jet shapes, and for the presence of spurious additional jets, are described, with a primary focus on the large 20.3 fb−1 data sample.
Abstract: The large rate of multiple simultaneous proton-proton interactions, or pile-up, generated by the Large Hadron Collider in Run 1 required the development of many new techniques to mitigate the advers ...
••
TL;DR: It is shown that re-expression of the Shank3 gene in adult mice led to improvements in synaptic protein composition, spine density and neural function in the striatum, and a certain degree of continued plasticity in the adult diseased brain is demonstrated.
Abstract: Because autism spectrum disorders are neurodevelopmental disorders and patients typically display symptoms before the age of three, one of the key questions in autism research is whether the pathology is reversible in adults. Here we investigate the developmental requirement of Shank3 in mice, a prominent monogenic autism gene that is estimated to contribute to approximately 1% of all autism spectrum disorder cases. SHANK3 is a postsynaptic scaffold protein that regulates synaptic development, function and plasticity by orchestrating the assembly of the postsynaptic density macromolecular signalling complex. Disruptions of the Shank3 gene in mouse models have resulted in synaptic defects and autistic-like behaviours including anxiety, social interaction deficits, and repetitive behaviour. We generated a novel Shank3 conditional knock-in mouse model, and show that re-expression of the Shank3 gene in adult mice led to improvements in synaptic protein composition, spine density and neural function in the striatum. We also provide behavioural evidence that certain behavioural abnormalities, including social interaction deficits and repetitive grooming behaviour, could be rescued, while anxiety and motor coordination deficits could not be recovered in adulthood. Together, these results reveal the profound effect of post-developmental activation of Shank3 expression on neural function, and demonstrate a certain degree of continued plasticity in the adult diseased brain.
••
TL;DR: This review presents the recent advances and applications made hitherto in understanding the biochemical and molecular mechanisms of plant–microbe interactions and their role in the major processes involved in phytoremediation, such as heavy metal detoxification, mobilization, immobilization, transformation, transport, and distribution.
Abstract: Plants and microbes coexist or compete for survival and their cohesive interactions play a vital role in adapting to metalliferous environments, and can thus be explored to improve microbe-assisted phytoremediation. Plant root exudates are useful nutrient and energy sources for soil microorganisms, with whom they establish intricate communication systems. Some beneficial bacteria and fungi, acting as plant growth promoting microorganisms (PGPMs), may alleviate metal phytotoxicity and stimulate plant growth indirectly via the induction of defense mechanisms against phytopathogens, and/or directly through the solubilization of mineral nutrients (nitrogen, phosphate, potassium, iron, etc.), production of plant growth promoting substances (e.g., phytohormones), and secretion of specific enzymes (e.g., 1-aminocyclopropane-1-carboxylate deaminase). PGPM can also change metal bioavailability in soil through various mechanisms such as acidification, precipitation, chelation, complexation, and redox reactions. This review presents the recent advances and applications made hitherto in understanding the biochemical and molecular mechanisms of plant-microbe interactions and their role in the major processes involved in phytoremediation, such as heavy metal detoxification, mobilization, immobilization, transformation, transport, and distribution.
••
TL;DR: SNO+ is a large liquid scintillator-based experiment located 2 km underground at SNOLAB, Sudbury, Canada, as mentioned in this paper; its primary goal is a search for the neutrinoless double-beta decay (0νββ) of 130Te.
Abstract: SNO+ is a large liquid scintillator-based experiment located 2 km underground at SNOLAB, Sudbury, Canada. It reuses the Sudbury Neutrino Observatory detector, consisting of a 12 m diameter acrylic vessel which will be filled with about 780 tonnes of ultra-pure liquid scintillator. Designed as a multipurpose neutrino experiment, the primary goal of SNO+ is a search for the neutrinoless double-beta decay (0νββ) of 130Te. In Phase I, the detector will be loaded with 0.3% natural tellurium, corresponding to nearly 800 kg of 130Te, with an expected effective Majorana neutrino mass sensitivity in the region of 55–133 meV, just above the inverted mass hierarchy. Recently, the possibility of deploying up to ten times more natural tellurium has been investigated, which would enable SNO+ to achieve sensitivity deep into the parameter space for the inverted neutrino mass hierarchy in the future. Additionally, SNO+ aims to measure reactor antineutrino oscillations, low energy solar neutrinos, and geoneutrinos, to be sensitive to supernova neutrinos, and to search for exotic physics. A first phase with the detector filled with water will begin soon, with the scintillator phase expected to start after a few months of water data taking. The 0νββ Phase I is foreseen for 2017.
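The "effective Majorana neutrino mass" used above has the standard 0νββ definition (textbook notation, not taken from this summary):
$$m_{\beta\beta} = \left| \sum_i U_{ei}^2\, m_i \right|,$$
where $U$ is the neutrino mixing matrix and $m_i$ are the mass eigenvalues; the quoted 55–133 meV sensitivity band is what places Phase I just above the inverted-hierarchy region.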
••
TL;DR: This review provides a critical appraisal and synthesis of the published epidemiological studies on procedural pain in neonates admitted to intensive care units, to determine the frequency of painful procedures and pain management interventions and to identify their predictors.
••
Research Institute for Nature and Forest1, Université de Namur2, Norwegian University of Life Sciences3, National University of Singapore4, University of Helsinki5, University of Trento6, ESCP Europe7, Finnish Environment Institute8, Martin Luther University of Halle-Wittenberg9, Helmholtz Centre for Environmental Research - UFZ10, University of Coimbra11, University of Melbourne12, University of Leeds13, Earth Economics14, Universidade Nova de Lisboa15, Vrije Universiteit Brussel16, University of Waikato17, University of Southampton18, University of Queensland19, Plymouth State University20, VU University Amsterdam21, University College London22
TL;DR: In this article, the authors advocate the adoption of a plural valuation culture and its establishment as a common practice, contesting and complementing ineffective and discriminatory single-value approaches.
Abstract: We are increasingly confronted with severe social and economic impacts of environmental degradation all over the world. From a valuation perspective, environmental problems and conflicts originate from trade-offs between values. The urgency and importance of integrating nature's diverse values in decisions and actions stand out more than ever.
Valuation, in its broad sense of 'assigning importance', is inherently part of most decisions on natural resource and land use. Scholars from different traditions, while moving from heuristic interdisciplinary debate to applied transdisciplinary science, now acknowledge the need for combining multiple disciplines and methods to represent the diverse set of values of nature. This growing group of scientists and practitioners share the ambition to explore how combinations of ecological, socio-cultural and economic valuation tools can support real-life resource and land use decision-making.
The current sustainability challenges and the ineffectiveness of single-value approaches to offer relief demonstrate that continuing along a single path is no option. We advocate the adoption of a plural valuation culture and its establishment as a common practice, contesting and complementing ineffective and discriminatory single-value approaches. In policy and decision contexts with a willingness to improve sustainability, integrated valuation approaches can be blended into existing processes, whereas in contexts of power asymmetries or environmental conflicts, integrated valuation can promote the inclusion of diverse values through action research and support the struggle for social and environmental justice.
The special issue and this editorial synthesis paper bring together lessons from pioneer case studies and research papers, synthesizing main challenges and setting out priorities for the years to come for the field of integrated valuation.
••
TL;DR: In this article, a review of the development of energy-efficient and healthy ventilation in buildings is presented, in which the influence of occupants' behaviour on energy use and the correlation between ventilation and the occupants' health and productivity are also considered.
Abstract: Energy demand has been increasing worldwide and the building sector represents a large percentage of global energy consumption. Therefore, promoting energy efficiency in buildings is essential. Among all building services, Heating, Ventilation and Air Conditioning (HVAC) systems are significantly responsible for building energy use. In HVAC, ventilation is the key issue for providing suitable Indoor Air Quality (IAQ), while it is also responsible for energy consumption in buildings. Thus, improving ventilation systems plays an important role not only in fostering energy efficiency in buildings, but also in providing a better indoor climate for the occupants and consequently decreasing the likelihood of health issues. In recent decades, many energy-efficient ventilation methods have been developed by researchers to mitigate energy consumption in buildings. This paper reviews scientific research and reports, as well as building regulations and standards, which evaluated, investigated and reported the development of energy-efficient methods for ventilation in buildings. Besides energy-efficient methods such as natural and hybrid ventilation strategies, occupants' behaviour regarding ventilation can also affect the energy demand in buildings. Therefore, the influence of occupants' behaviour on energy use and the correlation between ventilation and the occupants' health and productivity were also considered. The review showed that ventilation is interrelated with many factors such as indoor and outdoor conditions, building characteristics, building application as well as users' behaviour. Thus, it is concluded that many factors must be taken into account when designing energy-efficient and healthy ventilation systems. Moreover, utilizing hybrid ventilation in buildings, integrated with suitable control strategies to adjust between mechanical and natural ventilation, leads to considerable energy savings while an appropriate IAQ is maintained.
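To make the review's closing point concrete, here is a minimal control-logic sketch (all function names and thresholds are hypothetical, chosen only for illustration) of a hybrid-ventilation strategy that switches between natural and mechanical ventilation while holding indoor air quality:

# A minimal hybrid-ventilation controller sketch; names and thresholds here
# are illustrative assumptions, not taken from the reviewed literature.
def choose_ventilation_mode(t_out_c, t_in_c, co2_ppm, wind_ok):
    """Pick 'off', 'natural', or 'mechanical' ventilation for one control step."""
    COMFORT_LOW, COMFORT_HIGH = 18.0, 24.0  # assumed indoor comfort band (deg C)
    CO2_LIMIT = 1000.0                      # assumed IAQ threshold (ppm CO2)
    if co2_ppm < CO2_LIMIT and COMFORT_LOW <= t_in_c <= COMFORT_HIGH:
        return "off"          # IAQ and comfort already acceptable: save fan energy
    if COMFORT_LOW <= t_out_c <= COMFORT_HIGH and wind_ok:
        return "natural"      # outdoor air usable: open vents, keep HVAC off
    return "mechanical"       # otherwise fall back to mechanical ventilation

# Example: a mild day with stale indoor air favours natural ventilation.
print(choose_ventilation_mode(t_out_c=21.0, t_in_c=25.0, co2_ppm=1200.0, wind_ok=True))

A real building-management system would also weigh humidity, outdoor pollution, noise, and occupancy schedules, which is exactly why the review concludes that many interrelated factors must be considered together.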
••
TL;DR: The luminosity determination for the ATLAS detector at the LHC during pp collisions at √s = 8 TeV in 2012 is presented in this article, where the evaluation of the luminosity scale is performed using several luminometers.
Abstract: The luminosity determination for the ATLAS detector at the LHC during pp collisions at √s = 8 TeV in 2012 is presented. The evaluation of the luminosity scale is performed using several luminometers ...
••
TL;DR: In this article, the uncertainties in neutron star radii and crust properties due to our limited knowledge of the equation of state are quantitatively analyzed, and a large set of unified equations of state for purely nucleonic matter is obtained based on twentyfour Skyrme interactions and nine relativistic mean field nuclear parametrizations.
Abstract: The uncertainties in neutron star radii and crust properties due to our limited knowledge of the equation of state are quantitatively analyzed. We first demonstrate the importance of a unified microscopic description for the different baryonic densities of the star. If the pressure functional is obtained by matching a crust and a core equation of state based on models with different properties at nuclear matter saturation, the uncertainties can be as large as $\sim 30\%$ for the crust thickness and 4% for the radius. Necessary conditions for causal and thermodynamically consistent matchings between the core and the crust are formulated and their consequences examined. A large set of unified equations of state for purely nucleonic matter is obtained based on twenty-four Skyrme interactions and nine relativistic mean-field nuclear parametrizations. In addition, for relativistic models fifteen equations of state including a transition to hyperonic matter at high density are presented. All these equations of state have in common the property of describing a $2\,M_\odot$ star and of being causal within stable neutron stars. Spans of $\sim 3$ and $\sim 4$ km are obtained for the radius of, respectively, $1.0\,M_\odot$ and $2.0\,M_\odot$ stars. Applying a set of nine further constraints from experiment and ab initio calculations, the uncertainty is reduced to $\sim 1$ and 2 km, respectively. These residual uncertainties reflect the lack of constraints at large densities and insufficient information on the density dependence of the equation of state near the nuclear matter saturation point. The most important parameter to be constrained is shown to be the symmetry energy slope $L$. Indeed, this parameter exhibits a linear correlation with the stellar radius, which is particularly clear for small mass stars around $1.0\,M_\odot$. The other equation-of-state parameters do not show clear correlations with the radius, within the present uncertainties. Potential constraints on $L$, the neutron star radius, and the equation of state from observations of thermal states of neutron stars are also discussed. The unified equations of state are made available in the Supplemental Materials and via the CompOSE database.
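For context, each unified equation of state $P(\varepsilon)$ is mapped to a mass-radius curve by integrating the Tolman-Oppenheimer-Volkoff equations (the standard equations, stated here in units with $G = c = 1$, not reproduced from the paper):
$$\frac{dP}{dr} = -\frac{(\varepsilon + P)(m + 4\pi r^3 P)}{r(r - 2m)}, \qquad \frac{dm}{dr} = 4\pi r^2 \varepsilon,$$
integrated outward from a chosen central pressure until $P = 0$, which defines $R$ and $M(R)$; scanning central pressures traces the mass-radius curve along which the reported $L$-radius correlation is read off.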
••
TL;DR: It is shown that naive CD4+ T cell activation induces a unique program of mitochondrial biogenesis and remodeling that generates specialized mitochondria with enhanced one-carbon metabolism, which is critical for T cell activation and survival.
••
TL;DR: The results suggest that the ridge in pp collisions arises from the same or similar underlying physics as observed in p+Pb collisions, and that the dynamics responsible for the ridge has no strong $\sqrt{s}$ dependence.
Abstract: ATLAS has measured two-particle correlations as a function of relative azimuthal-angle, $\Delta \phi$, and pseudorapidity, $\Delta \eta$, in $\sqrt{s}$=13 and 2.76 TeV $pp$ collisions at the LHC using charged particles measured in the pseudorapidity interval $|\eta|$<2.5. The correlation functions evaluated in different intervals of measured charged-particle multiplicity show a multiplicity-dependent enhancement at $\Delta \phi \sim 0$ that extends over a wide range of $\Delta\eta$, which has been referred to as the "ridge". Per-trigger-particle yields, $Y(\Delta \phi)$, are measured over 2<$|\Delta\eta|$<5. For both collision energies, the $Y(\Delta \phi)$ distribution in all multiplicity intervals is found to be consistent with a linear combination of the per-trigger-particle yields measured in collisions with less than 20 reconstructed tracks, and a constant combinatoric contribution modulated by $\cos{(2\Delta \phi)}$. The fitted Fourier coefficient, $v_{2,2}$, exhibits factorization, suggesting that the ridge results from per-event $\cos{(2\phi)}$ modulation of the single-particle distribution with Fourier coefficients $v_2$. The $v_2$ values are presented as a function of multiplicity and transverse momentum. They are found to be approximately constant as a function of multiplicity and to have a $p_{\mathrm{T}}$ dependence similar to that measured in $p$+Pb and Pb+Pb collisions. The $v_2$ values in the 13 and 2.76 TeV data are consistent within uncertainties. These results suggest that the ridge in $pp$ collisions arises from the same or similar underlying physics as observed in $p$+Pb collisions, and that the dynamics responsible for the ridge has no strong $\sqrt{s}$ dependence.
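Schematically, the fit described above can be written as (the labels $F$, $G$, and $Y^{\mathrm{periph}}$ are introduced here for readability, following the abstract's description):
$$Y(\Delta\phi) = F\, Y^{\mathrm{periph}}(\Delta\phi) + G\left[1 + 2 v_{2,2}\cos(2\Delta\phi)\right],$$
where $Y^{\mathrm{periph}}$ is the per-trigger yield measured in events with fewer than 20 reconstructed tracks; the factorization $v_{2,2}(p_{\mathrm{T}}^a, p_{\mathrm{T}}^b) = v_2(p_{\mathrm{T}}^a)\, v_2(p_{\mathrm{T}}^b)$ is what supports interpreting the ridge as an event-wide $\cos(2\phi)$ modulation of the single-particle distribution.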