
Showing papers by "University of Antwerp" published in 2012


Journal ArticleDOI
TL;DR: In this paper, results are reported from searches for the standard model Higgs boson in proton-proton collisions at 7 and 8 TeV in the CMS experiment at the LHC, using data samples corresponding to integrated luminosities of up to 5.1 fb^-1 at 7 TeV and 5.3 fb^-1 at 8 TeV; an excess of events is observed at a mass near 125 GeV with a local significance of 5.0 standard deviations, where 5.8 standard deviations were expected for a standard model Higgs boson of that mass.

8,857 citations


Journal ArticleDOI
TL;DR: These guidelines are presented for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. A key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process vs. those that measure flux through the autophagy pathway (i.e., the complete process); thus, a block in macroautophagy that results in autophagosome accumulation needs to be differentiated from stimuli that result in increased autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.

4,316 citations


Journal ArticleDOI
TL;DR: The European Position Paper on Rhinosinusitis and Nasal Polyps 2012, as discussed by the authors, is the update of similar evidence-based position papers published in 2005 and 2007; it contains chapters on definitions and classification, and now also proposes definitions for difficult-to-treat rhinosinusitis, control of disease, and better definitions for rhinosinusitis in children.
Abstract: The European Position Paper on Rhinosinusitis and Nasal Polyps 2012 is the update of similar evidence based position papers published in 2005 and 2007. The document contains chapters on definitions and classification, we now also proposed definitions for difficult to treat rhinosinusitis, control of disease and better definitions for rhinosinusitis in children. More emphasis is placed on the diagnosis and treatment of acute rhinosinusitis. Throughout the document the terms chronic rhinosinusitis without nasal polyps (CRSsNP) and chronic rhinosinusitis with nasal polyps (CRSwNP) are used to further point out differences in pathophysiology and treatment of these two entities. There are extensive chapters on epidemiology and predisposing factors, inflammatory mechanisms, (differential) diagnosis of facial pain, genetics, cystic fibrosis, aspirin exacerbated respiratory disease, immunodeficiencies, allergic fungal rhinosinusitis and the relationship between upper and lower airways. The chapters on paediatric acute and chronic rhinosinusitis are totally rewritten. Last but not least all available evidence for management of acute rhinosinusitis and chronic rhinosinusitis with or without nasal polyps in adults and children is analyzed and presented and management schemes based on the evidence are proposed. This executive summary for otorhinolaryngologists focuses on the most important changes and issues for otorhinolaryngologists. The full document can be downloaded for free on the website of this journal: http://www.rhinologyjournal.com.

1,608 citations


Journal ArticleDOI
TL;DR: Light is shed on the genetic architecture and pathophysiological mechanisms underlying BMD variation and fracture susceptibility; several of the identified loci cluster within the RANK-RANKL-OPG, mesenchymal stem cell differentiation, endochondral ossification and Wnt signaling pathways.
Abstract: Bone mineral density (BMD) is the most widely used predictor of fracture risk. We performed the largest meta-analysis to date on lumbar spine and femoral neck BMD, including 17 genome-wide association studies and 32,961 individuals of European and east Asian ancestry. We tested the top BMD-associated markers for replication in 50,933 independent subjects and for association with risk of low-trauma fracture in 31,016 individuals with a history of fracture (cases) and 102,444 controls. We identified 56 loci (32 new) associated with BMD at genome-wide significance (P < 5 × 10^-8). Several of these factors cluster within the RANK-RANKL-OPG, mesenchymal stem cell differentiation, endochondral ossification and Wnt signaling pathways. However, we also discovered loci that were localized to genes not known to have a role in bone biology. Fourteen BMD-associated loci were also associated with fracture risk (P < 5 × 10^-4, Bonferroni corrected), of which six reached P < 5 × 10^-8, including at 18p11.21 (FAM210A), 7q21.3 (SLC25A13), 11q13.2 (LRP5), 4q22.1 (MEPE), 2p16.2 (SPTBN1) and 10q21.1 (DKK1). These findings shed light on the genetic architecture and pathophysiological mechanisms underlying BMD variation and fracture susceptibility.
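As a side note on the two thresholds quoted above, both follow standard multiple-testing conventions. The arithmetic below is only an illustration: the genome-wide cut-off assumes roughly one million effectively independent common variants, and the number of replication markers m is an assumed placeholder, since the abstract does not state it.

```latex
\[
  \alpha_{\mathrm{genome\text{-}wide}} \;=\; \frac{0.05}{10^{6}} \;=\; 5\times 10^{-8},
  \qquad
  \alpha_{\mathrm{fracture}} \;=\; \frac{0.05}{m} \;\approx\; \frac{0.05}{100} \;=\; 5\times 10^{-4}
  \quad (m \approx 100,\ \text{assumed for illustration}).
\]
```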

1,076 citations


Journal ArticleDOI
TL;DR: The consequences of the presence and magnitude of different costs during different phases of the dispersal process, and of their internal organisation through covariation with other life-history traits, are synthesised with respect to species conservation and the need for a new generation of spatial simulation models.
Abstract: Dispersal costs can be classified into energetic, time, risk and opportunity costs and may be levied directly or deferred during departure, transfer and settlement. They may equally be incurred during life stages before the actual dispersal event through investments in special morphologies. Because costs will eventually determine the performance of dispersing individuals and the evolution of dispersal, we here provide an extensive review on the different cost types that occur during dispersal in a wide array of organisms, ranging from micro-organisms to plants, invertebrates and vertebrates. In general, costs of transfer have been more widely documented in actively dispersing organisms, in contrast to a greater focus on costs during departure and settlement in plants and animals with a passive transfer phase. Costs related to the development of specific dispersal attributes appear to be much more prominent than previously accepted. Because costs induce trade-offs, they give rise to covariation between dispersal and other life-history traits at different scales of organismal organisation. The consequences of (i) the presence and magnitude of different costs during different phases of the dispersal process, and (ii) their internal organisation through covariation with other life-history traits, are synthesised with respect to potential consequences for species conservation and the need for development of a new generation of spatial simulation models.

1,049 citations


Journal ArticleDOI
TL;DR: The fundamental mechanisms of WBAN including architecture and topology, wireless implant communication, low-power Medium Access Control (MAC) and routing protocols are reviewed and many useful solutions are discussed for each layer.
Abstract: Recent advances in microelectronics and integrated circuits, system-on-chip design, wireless communication and intelligent low-power sensors have allowed the realization of a Wireless Body Area Network (WBAN). A WBAN is a collection of low-power, miniaturized, invasive/non-invasive lightweight wireless sensor nodes that monitor the human body functions and the surrounding environment. In addition, it supports a number of innovative and interesting applications such as ubiquitous healthcare, entertainment, interactive gaming, and military applications. In this paper, the fundamental mechanisms of WBAN including architecture and topology, wireless implant communication, low-power Medium Access Control (MAC) and routing protocols are reviewed. A comprehensive study of the proposed technologies for WBAN at Physical (PHY), MAC, and Network layers is presented and many useful solutions are discussed for each layer. Finally, numerous WBAN applications are highlighted.

788 citations


Journal ArticleDOI
29 Mar 2012
TL;DR: In this article, the authors reported results from searches for the standard model Higgs boson in proton-proton collisions at square root(s) = 7 TeV in five decay modes: gamma pair, b-quark pair, tau lepton pair, W pair, and Z pair.
Abstract: Combined results are reported from searches for the standard model Higgs boson in proton-proton collisions at sqrt(s)=7 TeV in five Higgs boson decay modes: gamma pair, b-quark pair, tau lepton pair, W pair, and Z pair. The explored Higgs boson mass range is 110-600 GeV. The analysed data correspond to an integrated luminosity of 4.6-4.8 inverse femtobarns. The expected excluded mass range in the absence of the standard model Higgs boson is 118-543 GeV at 95% CL. The observed results exclude the standard model Higgs boson in the mass range 127-600 GeV at 95% CL, and in the mass range 129-525 GeV at 99% CL. An excess of events above the expected standard model background is observed at the low end of the explored mass range making the observed limits weaker than expected in the absence of a signal. The largest excess, with a local significance of 3.1 sigma, is observed for a Higgs boson mass hypothesis of 124 GeV. The global significance of observing an excess with a local significance greater than 3.1 sigma anywhere in the search range 110-600 (110-145) GeV is estimated to be 1.5 sigma (2.1 sigma). More data are required to ascertain the origin of this excess.
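The local and global significances quoted in this abstract are related to p-values in the usual one-sided Gaussian convention; the relations below are generic, with the effective number of independent mass hypotheses left symbolic because it is not given in the abstract.

```latex
\[
  Z_{\mathrm{local}} = \Phi^{-1}\!\left(1 - p_{0}\right)
  \quad\Rightarrow\quad
  Z_{\mathrm{local}} = 3.1 \;\;\text{corresponds to}\;\; p_{0} \approx 9.7\times 10^{-4},
\]
\[
  p_{\mathrm{global}} \;\approx\; N_{\mathrm{trials}}\, p_{0},
  \qquad
  Z_{\mathrm{global}} = \Phi^{-1}\!\left(1 - p_{\mathrm{global}}\right),
\]
where $\Phi$ is the standard normal cumulative distribution function and the trials factor $N_{\mathrm{trials}}$ accounts for the look-elsewhere effect over the 110-600 (or 110-145) GeV search range.
```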

786 citations


Journal ArticleDOI
01 Jan 2012 - Gut
TL;DR: In many countries the high rate of clarithromycin resistance no longer allows its empirical use in standard anti-H pylori regimens, so knowledge of outpatient antibiotic consumption may provide a simple tool to predict the susceptibility of H pylori to quinolones and to macrolides and to adapt treatment strategies.
Abstract: In many countries the high rate of clarithromycin resistance no longer allows its empirical use in standard anti-H pylori regimens. The knowledge of outpatient antibiotic consumption may provide a simple tool to predict the susceptibility of H pylori to quinolones and to macrolides and to adapt the treatment strategies.

764 citations


Journal ArticleDOI
TL;DR: The total exposure to BPA is several orders of magnitude lower than the current tolerable daily intake of 50 μg/kg bw/day, and the use of urinary concentrations from biomonitoring studies seems reliable for the overall exposure assessment.

731 citations


Book
14 Dec 2012
TL;DR: In this book, Hutto and Myin defend the counter-thesis that there can be intentionality and phenomenal experience without content, and demonstrate the advantages of their approach for thinking about scaffolded minds and consciousness.
Abstract: Most of what humans do and experience is best understood in terms of dynamically unfolding interactions with the environment. Many philosophers and cognitive scientists now acknowledge the critical importance of situated, environment-involving embodied engagements as a means of understanding basic minds -- including basic forms of human mentality. Yet many of these same theorists hold fast to the view that basic minds are necessarily or essentially contentful -- that they represent conditions the world might be in. In this book, Daniel Hutto and Erik Myin promote the cause of a radically enactive, embodied approach to cognition that holds that some kinds of minds -- basic minds -- are neither best explained by processes involving the manipulation of contents nor inherently contentful. Hutto and Myin oppose the widely endorsed thesis that cognition always and everywhere involves content. They defend the counter-thesis that there can be intentionality and phenomenal experience without content, and demonstrate the advantages of their approach for thinking about scaffolded minds and consciousness.

688 citations


Journal ArticleDOI
TL;DR: A standardised methodology for a combined point prevalence survey (PPS) on healthcare-associated infections (HAIs) and antimicrobial use in European acute care hospitals developed by the European Centre for Disease Prevention and Control was piloted across Europe.
Abstract: A standardised methodology for a combined point prevalence survey (PPS) on healthcare-associated infections (HAIs) and antimicrobial use in European acute care hospitals developed by the European Centre for Disease Prevention and Control was piloted across Europe. Variables were collected at national, hospital and patient level in 66 hospitals from 23 countries. A patient-based and a unit-based protocol were available. Feasibility was assessed via national and hospital questionnaires. Of 19,888 surveyed patients, 7.1% had an HAI and 34.6% were receiving at least one antimicrobial agent. Prevalence results were highest in intensive care units, with 28.1% patients with HAI, and 61.4% patients with antimicrobial use. Pneumonia and other lower respiratory tract infections (2.0% of patients; 95% confidence interval (CI): 1.8–2.2%) represented the most common type (25.7%) of HAI. Surgical prophylaxis was the indication for 17.3% of used antimicrobials and exceeded one day in 60.7% of cases. Risk factors in the patient-based protocol were provided for 98% or more of the included patients and all were independently associated with both presence of HAI and receiving an antimicrobial agent. The patient-based protocol required more work than the unit-based protocol, but allowed collecting detailed data and analysis of risk factors for HAI and antimicrobial use.
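The pneumonia prevalence and its confidence interval quoted above can be reproduced, to the reported precision, with a standard normal-approximation (Wald) interval. The short sketch below does exactly that, taking the 2.0% point estimate against the 19,888 surveyed patients at face value, since the exact case count is not given in the abstract.

```python
from math import sqrt

# Normal-approximation (Wald) 95% CI for a prevalence proportion.
# Assumption: 2.0% of the 19,888 surveyed patients, as quoted above;
# the exact numerator is not reported in the abstract.
n = 19_888            # surveyed patients
p = 0.020             # observed prevalence of pneumonia / lower respiratory tract infection
z = 1.96              # two-sided 95% normal quantile

se = sqrt(p * (1 - p) / n)             # standard error of the proportion
lower, upper = p - z * se, p + z * se
print(f"{p:.1%} (95% CI {lower:.1%}-{upper:.1%})")   # -> 2.0% (1.8%-2.2%)
```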

Journal ArticleDOI
TL;DR: In this article, the electronic properties of two-dimensional honeycomb structures of molybdenum disulfide (MoS2) subjected to biaxial strain have been investigated using first-principles calculations based on density functional theory.
Abstract: The electronic properties of two-dimensional honeycomb structures of molybdenum disulfide (MoS2) subjected to biaxial strain have been investigated using first-principles calculations based on density functional theory. On applying compressive or tensile bi-axial strain on bi-layer and mono-layer MoS2, the electronic properties are predicted to change from semiconducting to metallic. These changes present very interesting possibilities for engineering the electronic properties of two-dimensional structures of MoS2.
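For readers who want to set up the kind of strain series described above, the following is a minimal sketch using ASE; the lattice parameters, the ±10% strain range, and the (commented-out) choice of DFT calculator are illustrative assumptions rather than the settings used by the authors.

```python
# Build 2H-MoS2 monolayers under in-plane biaxial strain with ASE (sketch).
import numpy as np
from ase.build import mx2

def strained_mos2(strain):
    """Return a MoS2 monolayer with biaxial strain applied to the in-plane cell."""
    atoms = mx2(formula='MoS2', kind='2H', a=3.18, thickness=3.19, vacuum=10.0)
    cell = np.array(atoms.get_cell())
    cell[:2, :] *= (1.0 + strain)            # scale only the two in-plane lattice vectors
    atoms.set_cell(cell, scale_atoms=True)   # keeps fractional coordinates; relax ions afterwards
    return atoms

for strain in np.linspace(-0.10, 0.10, 11):  # compressive to tensile biaxial strain
    atoms = strained_mos2(strain)
    # atoms.calc = <any ASE-compatible DFT calculator (e.g. GPAW, VASP, Quantum ESPRESSO)>
    # energy = atoms.get_potential_energy()  # then extract the band structure / band gap
    print(f"strain {strain:+.2%}: a = {atoms.cell.lengths()[0]:.3f} Å")
```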

Journal ArticleDOI
TL;DR: The GGGGCC repeat expansion is highly penetrant, explaining all of the contribution of chromosome 9p21 to FTLD and ALS in the Flanders-Belgian cohort; decreased expression of C9orf72 in brain suggests haploinsufficiency as an underlying disease mechanism.
Abstract: Summary Background Amyotrophic lateral sclerosis (ALS) and frontotemporal lobar degeneration (FTLD) are extremes of a clinically, pathologically, and genetically overlapping disease spectrum. A locus on chromosome 9p21 has been associated with both disorders, and we aimed to identify the causal gene within this region. Methods We studied 305 patients with FTLD, 137 with ALS, and 23 with concomitant FTLD and ALS (FTLD-ALS) and 856 controls from Flanders (Belgium); patients were identified from a hospital-based cohort and were negative for mutations in known FTLD and ALS genes. We also examined the family of one patient with FTLD-ALS previously linked to 9p21 (family DR14). We analysed 130 kbp at 9p21 in association and segregation studies, genomic sequencing, repeat genotyping, and expression studies to identify the causal mutation. We compared genotype-phenotype correlations between mutation carriers and non-carriers. Findings In the patient-control cohort, the single-nucleotide polymorphism rs28140707 within the 130 kbp region of 9p21 was associated with disease (odds ratio [OR] 2·6, 95% CI 1·5–4·7; p=0·001). A GGGGCC repeat expansion in C9orf72 completely co-segregated with disease in family DR14. The association of rs28140707 with disease in the patient-control cohort was abolished when we excluded GGGGCC repeat expansion carriers. In patients with familial disease, six (86%) of seven with FTLD-ALS, seven (47%) of 15 with ALS, and 12 (16%) of 75 with FTLD had the repeat expansion. In patients without known familial disease, one (6%) of 16 with FTLD-ALS, six (5%) of 122 with ALS, and nine (4%) of 230 with FTLD had the repeat expansion. Mutation carriers primarily presented with classic ALS (10 of 11 individuals) or behavioural variant FTLD (14 of 15 individuals). Mean age at onset of FTLD was 55·3 years (SD 8·4) in 21 mutation carriers and 63·2 years (9·6) in 284 non-carriers (p=0·001); mean age at onset of ALS was 54·5 years (9·9) in 13 carriers and 60·4 years (11·4) in 124 non-carriers. Postmortem neuropathological analysis of the brains of three mutation carriers with FTLD showed a notably low TDP-43 load. In brain at postmortem, C9orf72 expression was reduced by nearly 50% in two carriers compared with nine controls (p=0·034). In familial patients, 14% of FTLD-ALS, 50% of ALS, and 62% of FTLD was not accounted for by known disease genes. Interpretation We identified a pathogenic GGGGCC repeat expansion in C9orf72 on chromosome 9p21, as recently also reported in two other studies. The GGGGCC repeat expansion is highly penetrant, explaining all of the contribution of chromosome 9p21 to FTLD and ALS in the Flanders-Belgian cohort. Decreased expression of C9orf72 in brain suggests haploinsufficiency as an underlying disease mechanism. Unidentified genes probably also contribute to the FTLD-ALS disease spectrum. Funding Full funding sources listed at end of paper (see Acknowledgments).

Journal ArticleDOI
TL;DR: The primary target audience of this position paper is clinicians who have limited orientation with CPX but whose caregiving would be enhanced by familiarity and application of this assessment; the document also provides a series of forms designed to highlight the utility of CPX in clinical decision-making.
Abstract: From an evidence-based perspective, cardiopulmonary exercise testing (CPX) is a well-supported assessment technique in both the United States (US) and Europe. The combination of standard exercise testing (ET) (ie, progressive exercise provocation in association with serial electrocardiograms [ECG], hemodynamics, oxygen saturation, and subjective symptoms) and measurement of ventilatory gas exchange amounts to a superior method to: 1) accurately quantify cardiorespiratory fitness (CRF), 2) delineate the physiologic system(s) underlying exercise responses, which can be applied as a means to identify the exercise-limiting pathophysiologic mechanism(s) and/or performance differences, and 3) formulate function-based prognostic stratification. Cardiopulmonary ET certainly carries an additional cost as well as competency requirements and is not an essential component of evaluation in all patient populations. However, there are several conditions of confirmed, suspected, or unknown etiology where the data gained from this form of ET is highly valuable in terms of clinical decision making.1 Several CPX statements have been published by well-respected organizations in both the US and Europe.1–5 Despite these prominent reports and the plethora of pertinent medical literature which they feature, underutilization of CPX persists. This discrepancy is at least partly attributable to the fact that the currently available CPX consensus statements are inherently complex and fail to convey succinct, clinically centered strategies to utilize CPX indices effectively. Likewise, current CPX software packages generate an overwhelming abundance of data, which to most clinicians are incomprehensible and abstract. Ironically, in contrast to the protracted scientific statements and dense CPX data outputs, the list of CPX variables that have proven clinical application is concise and uncomplicated. Therefore, the goal of this writing group is to present an approach of CPX in a way that assists in making meaningful decisions regarding a patient’s care. Experts from the European Association for Cardiovascular Prevention and Rehabilitation and American Heart Association have joined in this effort to distill easy-to-follow guidance on CPX interpretation based upon current scientific evidence. This document also provides a series of forms that are designed to highlight the utility of CPX in clinical decision-making. Not only will this improve patient management, it will also catalyze uniform and unambiguous data interpretation across laboratories on an international level. The primary target audience of this position paper is clinicians who have limited orientation with CPX but whose caregiving would be enhanced by familiarity and application of this assessment. The ultimate goal is to increase awareness of the value of CPX and to increase the number of healthcare professionals who are able to perform clinically meaningful CPX interpretation. Moreover, this document will hopefully lead to an increase in appropriate patient referrals to CPX with enhanced efficiencies in patient management. For more detailed information on CPX, including procedures for patient preparation, equipment calibration, and conducting the test, readers are encouraged to review other publications that address these and other topics in great detail.1–5

Journal ArticleDOI
15 Mar 2012 - Nature
TL;DR: This study demonstrates that the lipid sensor GPR120 has a key role in sensing dietary fat and, therefore, in the control of energy balance in both humans and rodents.
Abstract: Free fatty acids provide an important energy source as nutrients, and act as signalling molecules in various cellular processes. Several G-protein-coupled receptors have been identified as free-fatty-acid receptors important in physiology as well as in several diseases. GPR120 (also known as O3FAR1) functions as a receptor for unsaturated long-chain free fatty acids and has a critical role in various physiological homeostasis mechanisms such as adipogenesis, regulation of appetite and food preference. Here we show that GPR120-deficient mice fed a high-fat diet develop obesity, glucose intolerance and fatty liver with decreased adipocyte differentiation and lipogenesis and enhanced hepatic lipogenesis. Insulin resistance in such mice is associated with reduced insulin signalling and enhanced inflammation in adipose tissue. In human, we show that GPR120 expression in adipose tissue is significantly higher in obese individuals than in lean controls. GPR120 exon sequencing in obese subjects reveals a deleterious non-synonymous mutation (p.R270H) that inhibits GPR120 signalling activity. Furthermore, the p.R270H variant increases the risk of obesity in European populations. Overall, this study demonstrates that the lipid sensor GPR120 has a key role in sensing dietary fat and, therefore, in the control of energy balance in both humans and rodents.

Journal ArticleDOI
TL;DR: An updated inventory of microorganisms used in food fermentations, covering a wide range of food matrices, is presented, and the taxonomy is reviewed and updated to bring it into agreement with the current standing in nomenclature.

Journal ArticleDOI
TL;DR: A broad overview of recent numerical models that quantify the formation and evolution of salt marshes under different physical and ecological drivers is presented in this article, focusing on the coupling between geomorphological and ecological processes and how these feedbacks are included in predictive models of landform evolution.
Abstract: Salt marshes are delicate landforms at the boundary between the sea and land. These ecosystems support a diverse biota that modifies the erosive characteristics of the substrate and mediates sediment transport processes. Here we present a broad overview of recent numerical models that quantify the formation and evolution of salt marshes under different physical and ecological drivers. In particular, we focus on the coupling between geomorphological and ecological processes and on how these feedbacks are included in predictive models of landform evolution. We describe in detail models that simulate fluxes of water, organic matter, and sediments in salt marshes. The interplay between biological and morphological processes often produces a distinct scarp between salt marshes and tidal flats. Numerical models can capture the dynamics of this boundary and the progradation or regression of the marsh in time. Tidal channels are also key features of the marsh landscape, flooding and draining the marsh platform and providing a source of sediments and nutrients to the marsh ecosystem. In recent years, several numerical models have been developed to describe the morphogenesis and long-term dynamics of salt marsh channels. Finally, salt marshes are highly sensitive to the effects of long-term climatic change. We therefore discuss in detail how numerical models have been used to determine salt marsh survival under different scenarios of sea level rise.
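To make the feedbacks discussed above concrete, here is a deliberately minimal zero-dimensional sketch of marsh-platform elevation under sea-level rise; the parabolic biomass response and all rate constants are illustrative assumptions and are not taken from any specific model in the review.

```python
# Zero-dimensional marsh elevation sketch: accretion depends on inundation
# depth (sediment supply) and on vegetation biomass (organic accretion),
# while sea-level rise deepens the platform.  Purely illustrative parameters.
def simulate_marsh(years=300, dt=0.1, slr=0.005,
                   k_min=0.015,   # mineral accretion coefficient (1/yr)
                   k_org=0.005,   # peak organic accretion rate (m/yr)
                   d_max=1.0):    # inundation depth beyond which vegetation dies (m)
    """Track depth below mean high water (m); return None if the marsh drowns."""
    depth = 0.3
    for _ in range(int(years / dt)):
        biomass = max(0.0, 4 * (depth / d_max) * (1 - depth / d_max))  # 0..1, parabolic
        accretion = k_min * depth + k_org * biomass                    # m/yr
        depth += (slr - accretion) * dt
        if depth > d_max:
            return None                     # vegetation lost, platform converts to tidal flat
    return depth

for rate in (0.003, 0.005, 0.010, 0.020):   # sea-level rise scenarios (m/yr)
    d = simulate_marsh(slr=rate)
    print(f"SLR {rate*1000:.0f} mm/yr:", "drowns" if d is None else f"settles near {d:.2f} m depth")
```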

Journal ArticleDOI
TL;DR: In this article, the performance of muon reconstruction, identification, and triggering in CMS has been studied using 40 inverse picobarns of data collected in pp collisions at the LHC in 2010.
Abstract: The performance of muon reconstruction, identification, and triggering in CMS has been studied using 40 inverse picobarns of data collected in pp collisions at sqrt(s) = 7 TeV at the LHC in 2010. A few benchmark sets of selection criteria covering a wide range of physics analysis needs have been examined. For all considered selections, the efficiency to reconstruct and identify a muon with a transverse momentum pT larger than a few GeV is above 95% over the whole region of pseudorapidity covered by the CMS muon system, abs(eta)<2.4, while the probability to misidentify a hadron as a muon is well below 1%. The efficiency to trigger on single muons with pT above a few GeV is higher than 90% over the full eta range, and typically substantially better. The overall momentum scale is measured to a precision of 0.2% with muons from Z decays. The transverse momentum resolution varies from 1% to 6% depending on pseudorapidity for muons with pT below 100 GeV and, using cosmic rays, it is shown to be better than 10% in the central region up to pT = 1 TeV. Observed distributions of all quantities are well reproduced by the Monte Carlo simulation.

Journal ArticleDOI
TL;DR: The Large Hadron Electron Collider (LHeC), as discussed by the authors, is designed to run synchronously with the LHC and to achieve an integrated luminosity of O(100) fb$^{-1}$, which would make it the cleanest high-resolution microscope of mankind.
Abstract: This document provides a brief overview of the recently published report on the design of the Large Hadron Electron Collider (LHeC), which comprises its physics programme, accelerator physics, technology and main detector concepts. The LHeC exploits and develops challenging, though principally existing, accelerator and detector technologies. This summary is complemented by brief illustrations of some of the highlights of the physics programme, which relies on a vastly extended kinematic range, luminosity and unprecedented precision in deep inelastic scattering. Illustrations are provided regarding high precision QCD, new physics (Higgs, SUSY) and electron-ion physics. The LHeC is designed to run synchronously with the LHC in the twenties and to achieve an integrated luminosity of O(100)\,fb$^{-1}$. It will become the cleanest high resolution microscope of mankind and will substantially extend as well as complement the investigation of the physics of the TeV energy scale, which has been enabled by the LHC.

Journal ArticleDOI
TL;DR: In this article, the authors compare the results of various cosmological gas-dynamical codes used to simulate the formation of a galaxy in the Λ cold dark matter structure formation paradigm.
Abstract: We compare the results of various cosmological gas-dynamical codes used to simulate the formation of a galaxy in the Λ cold dark matter structure formation paradigm. The various runs (13 in total) differ in their numerical hydrodynamical treatment [smoothed particle hydrodynamics (SPH), moving mesh and adaptive mesh refinement] but share the same initial conditions and adopt in each case their latest published model of gas cooling, star formation and feedback. Despite the common halo assembly history, we find large code-to-code variations in the stellar mass, size, morphology and gas content of the galaxy at z= 0, due mainly to the different implementations of star formation and feedback. Compared with observation, most codes tend to produce an overly massive galaxy, smaller and less gas rich than typical spirals, with a massive bulge and a declining rotation curve. A stellar disc is discernible in most simulations, although its prominence varies widely from code to code. There is a well-defined trend between the effects of feedback and the severity of the disagreement with observed spirals. In general, models that are more effective at limiting the baryonic mass of the galaxy come closer to matching observed galaxy scaling laws, but often to the detriment of the disc component. Although numerical convergence is not particularly good for any of the codes, our conclusions hold at two different numerical resolutions. Some differences can also be traced to the different numerical techniques; for example, more gas seems able to cool and become available for star formation in grid-based codes than in SPH. However, this effect is small compared to the variations induced by different feedback prescriptions. We conclude that state-of-the-art simulations cannot yet uniquely predict the properties of the baryonic component of a galaxy, even when the assembly history of its host halo is fully specified. Developing feedback algorithms that can effectively regulate the mass of a galaxy without hindering the formation of high angular momentum stellar discs remains a challenge.

Journal ArticleDOI
TL;DR: Analysis of dust collected in California homes in 2006 and 2011 for 62 FRs and organohalogens suggests that manufacturers continue to use hazardous chemicals and replace chemicals of concern with chemicals with uncharacterized toxicity.
Abstract: Higher house dust levels of PBDE flame retardants (FRs) have been reported in California than other parts of the world, due to the state’s furniture flammability standard. However, changing levels of these and other FRs have not been evaluated following the 2004 U.S. phase-out of PentaBDE and OctaBDE. We analyzed dust collected in 16 California homes in 2006 and again in 2011 for 62 FRs and organohalogens, which represents the broadest investigation of FRs in homes. Fifty-five compounds were detected in at least one sample; 41 in at least 50% of samples. Concentrations of chlorinated OPFRs, including two (TCEP and TDCIPP) listed as carcinogens under California’s Proposition 65, were found up to 0.01% in dust, higher than previously reported in the U.S. In 75% of the homes, we detected TDBPP, or brominated “Tris,” which was banned in children’s sleepwear because of carcinogenicity. To our knowledge, this is the first report on TDBPP in house dust. Concentrations of Firemaster 550 components (EH-TBB, BEH-TE...

Journal ArticleDOI
TL;DR: Human-induced carbon and nitrogen fertilization are generating a strong imbalance with P, affecting carbon sequestration potential and the structure, function and evolution of the Earth’s ecosystems.
Abstract: Human-induced carbon and nitrogen fertilization are generating a strong imbalance with P. This imbalance confers an increasingly important role to P availability and N : P ratio in the Earth’s life system, affecting carbon sequestration potential and the structure, function and evolution of the Earth’s ecosystems.

Journal ArticleDOI
TL;DR: In this paper, the transverse momentum spectra of charged particles measured in pp and PbPb collisions at sqrt(sNN) = 2.76 TeV by the CMS experiment at the LHC are presented.
Abstract: The transverse momentum spectra of charged particles have been measured in pp and PbPb collisions at sqrt(sNN) = 2.76 TeV by the CMS experiment at the LHC. In the transverse momentum range pt = 5-10 GeV/c, the charged particle yield in the most central PbPb collisions is suppressed by up to a factor of 5 compared to the pp yield scaled by the number of incoherent nucleon-nucleon collisions. At higher pt, this suppression is significantly reduced, approaching roughly a factor of 2 for particles with pt in the range pt=40-100 GeV/c.
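The suppression described above is conventionally quantified by the nuclear modification factor; the standard definition is given below for reference, and a factor-of-5 suppression simply corresponds to a value of about 0.2.

```latex
\[
  R_{AA}(p_T) \;=\; \frac{\mathrm{d}N^{\mathrm{PbPb}}/\mathrm{d}p_T}
                         {\langle N_{\mathrm{coll}}\rangle \;\mathrm{d}N^{pp}/\mathrm{d}p_T},
  \qquad
  \text{suppression by a factor of } 5 \;\Longleftrightarrow\; R_{AA} \approx 0.2,
\]
where $\langle N_{\mathrm{coll}}\rangle$ is the average number of binary nucleon-nucleon collisions in the centrality class.
```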

Journal ArticleDOI
TL;DR: In this article, the transition-metal L2,3 electron energy-loss spectra for a wide range of V-, Mn- and Fe-based oxides were recorded and carefully analyzed for their correlation with the formal oxidation states of the transition-metal ions.

Journal ArticleDOI
TL;DR: The Alzheimer disease and frontotemporal dementia (AD&FTLD) and Parkinson disease (PD) Mutation Databases make available curated information on sequence variations in genes causing Mendelian forms of the most common neurodegenerative brain diseases: AD, frontotemporal lobar degeneration (FTLD), and PD.
Abstract: The Alzheimer disease and frontotemporal dementia (AD&FTLD) and Parkinson disease (PD) Mutation Databases make available curated information of sequence variations in genes causing Mendelian forms of the most common neurodegenerative brain disease AD, frontotemporal lobar degeneration (FTLD), and PD. They are established resources for clinical geneticists, neurologists, and researchers in need of comprehensive, referenced genetic, epidemiologic, clinical, neuropathological, and/or cell biological information of specific gene mutations in these diseases. In addition, the aggregate analysis of all information available in the databases provides unique opportunities to extract mutation characteristics and genotype–phenotype correlations, which would be otherwise unnoticed and unexplored. Such analyses revealed that 61.4% of mutations are private to one single family, while only 5.7% of mutations occur in 10 or more families. The five mutations with most frequent independent observations occur in 21% of AD, 43% of FTLD, and 48% of PD families recorded in the Mutation Databases, respectively. Although these figures are inevitably biased by a publishing policy favoring novel mutations, they probably also reflect the occurrence of multiple rare and few relatively common mutations in the inherited forms of these diseases. Finally, with the exception of the PD genes PARK2 and PINK1, all other genes are associated with more than one clinical diagnosis or characteristics thereof. Hum Mutat 33:1340–1344, 2012. © 2012 Wiley Periodicals, Inc.


Journal ArticleDOI
TL;DR: Improved outcomes for patients with severe traumatic brain injury could result from progress in pharmacological and other treatments, neural repair and regeneration, optimisation of surgical indications and techniques, and combination and individually targeted treatments.

Journal ArticleDOI
TL;DR: A systematic and holistic approach to investigate how soil and plant community characteristics change with altered precipitation regimes and the consequent effects on ecosystem processes and functioning within these experiments will greatly increase their value to the climate change and ecosystem research communities.
Abstract: Climatic changes, including altered precipitation regimes, will affect key ecosystem processes, such as plant productivity and biodiversity for many terrestrial ecosystems. Past and ongoing precipitation experiments have been conducted to quantify these potential changes. An analysis of these experiments indicates that they have provided important information on how water regulates ecosystem processes. However, they do not adequately represent global biomes nor forecasted precipitation scenarios and their potential contribution to advance our understanding of ecosystem responses to precipitation changes is therefore limited, as is their potential value for the development and testing of ecosystem models. This highlights the need for new precipitation experiments in biomes and ambient climatic conditions hitherto poorly studied applying relevant complex scenarios including changes in precipitation frequency and amplitude, seasonality, extremity and interactions with other global change drivers. A systematic and holistic approach to investigate how soil and plant community characteristics change with altered precipitation regimes and the consequent effects on ecosystem processes and functioning within these experiments will greatly increase their value to the climate change and ecosystem research communities. Experiments should specifically test how changes in precipitation leading to exceedance of biological thresholds affect ecosystem resilience and acclimation.

Journal ArticleDOI
TL;DR: This study investigated whether KCNQ2/3 mutations are a frequent cause of epileptic encephalopathies with an early onset and whether a recognizable phenotype exists.
Abstract: OBJECTIVE: KCNQ2 and KCNQ3 mutations are known to be responsible for benign familial neonatal seizures (BFNS). A few reports on patients with a KCNQ2 mutation with a more severe outcome exist, but a definite relationship has not been established. In this study we investigated whether KCNQ2/3 mutations are a frequent cause of epileptic encephalopathies with an early onset and whether a recognizable phenotype exists. METHODS: We analyzed 80 patients with unexplained neonatal or early-infantile seizures and associated psychomotor retardation for KCNQ2 and KCNQ3 mutations. Clinical and imaging data were reviewed in detail. RESULTS: We found 7 different heterozygous KCNQ2 mutations in 8 patients (8/80; 10%); 6 mutations arose de novo. One parent with a milder phenotype was mosaic for the mutation. No KCNQ3 mutations were found. The 8 patients had onset of intractable seizures in the first week of life with a prominent tonic component. Seizures generally resolved by age 3 years but the children had profound, or less frequently severe, intellectual disability with motor impairment. Electroencephalography (EEG) at onset showed a burst-suppression pattern or multifocal epileptiform activity. Early magnetic resonance imaging (MRI) of the brain showed characteristic hyperintensities in the basal ganglia and thalamus that later resolved. INTERPRETATION: KCNQ2 mutations are found in a substantial proportion of patients with a neonatal epileptic encephalopathy with a potentially recognizable electroclinical and radiological phenotype. This suggests that KCNQ2 screening should be included in the diagnostic workup of refractory neonatal seizures of unknown origin.

Journal ArticleDOI
TL;DR: The hypothesis that compensatory autocrine and/or paracrine events contribute to the pathogenesis of TGF-β–mediated vasculopathies is supported.
Abstract: Loeys-Dietz syndrome (LDS) associates with a tissue signature for high transforming growth factor (TGF)-β signaling but is often caused by heterozygous mutations in genes encoding positive effectors of TGF-β signaling, including either subunit of the TGF-β receptor or SMAD3, thereby engendering controversy regarding the mechanism of disease. Here, we report heterozygous mutations or deletions in the gene encoding the TGF-β2 ligand for a phenotype within the LDS spectrum and show upregulation of TGF-β signaling in aortic tissue from affected individuals. Furthermore, haploinsufficient Tgfb2(+/-) mice have aortic root aneurysm and biochemical evidence of increased canonical and noncanonical TGF-β signaling. Mice that harbor both a mutant Marfan syndrome (MFS) allele (Fbn1(C1039G/+)) and Tgfb2 haploinsufficiency show increased TGF-β signaling and phenotypic worsening in association with normalization of TGF-β2 expression and high expression of TGF-β1. Taken together, these data support the hypothesis that compensatory autocrine and/or paracrine events contribute to the pathogenesis of TGF-β-mediated vasculopathies.