
Showing papers by "Radboud University Nijmegen" published in 2010


Journal ArticleDOI
TL;DR: An international Expert Panel that conducted a systematic review and evaluation of the literature and developed recommendations for optimal IHC ER/PgR testing performance recommended that ER and PgR status be determined on all invasive breast cancers and breast cancer recurrences.
Abstract: Purpose To develop a guideline to improve the accuracy of immunohistochemical (IHC) estrogen receptor (ER) and progesterone receptor (PgR) testing in breast cancer and the utility of these receptors as predictive markers. Methods The American Society of Clinical Oncology and the College of American Pathologists convened an international Expert Panel that conducted a systematic review and evaluation of the literature in partnership with Cancer Care Ontario and developed recommendations for optimal IHC ER/PgR testing performance. Results Up to 20% of current IHC determinations of ER and PgR testing worldwide may be inaccurate (false negative or false positive). Most of the issues with testing have occurred because of variation in preanalytic variables, thresholds for positivity, and interpretation criteria. Recommendations The Panel recommends that ER and PgR status be determined on all invasive breast cancers and breast cancer recurrences. A testing algorithm that relies on accurate, reproducible assay performance is proposed. Elements to reliably reduce assay variation are specified. It is recommended that ER and PgR assays be considered positive if there are at least 1% positive tumor nuclei in the sample on testing in the presence of expected reactivity of internal (normal epithelial elements) and external controls. The absence of benefit from endocrine therapy for women with ER-negative invasive breast cancers has been confirmed in large overviews of randomized clinical trials.

3,902 citations


Journal ArticleDOI
TL;DR: The 2014 RCC guideline has been updated by a multidisciplinary panel using the highest methodological standards, and provides the best and most reliable contemporary evidence base for RCC management.

3,100 citations


Journal ArticleDOI
TL;DR: The 1000 Functional Connectomes Project (Fcon_1000) as discussed by the authors is a large-scale collection of functional connectome data from 1,414 volunteers collected independently at 35 international centers.
Abstract: Although it is being successfully implemented for exploration of the genome, discovery science has eluded the functional neuroimaging community. The core challenge remains the development of common paradigms for interrogating the myriad functional systems in the brain without the constraints of a priori hypotheses. Resting-state functional MRI (R-fMRI) constitutes a candidate approach capable of addressing this challenge. Imaging the brain during rest reveals large-amplitude spontaneous low-frequency (<0.1 Hz) fluctuations in the fMRI signal that are temporally correlated across functionally related areas. Referred to as functional connectivity, these correlations yield detailed maps of complex neural systems, collectively constituting an individual's "functional connectome." Reproducibility across datasets and individuals suggests the functional connectome has a common architecture, yet each individual's functional connectome exhibits unique features, with stable, meaningful interindividual differences in connectivity patterns and strengths. Comprehensive mapping of the functional connectome, and its subsequent exploitation to discern genetic influences and brain-behavior relationships, will require multicenter collaborative datasets. Here we initiate this endeavor by gathering R-fMRI data from 1,414 volunteers collected independently at 35 international centers. We demonstrate a universal architecture of positive and negative functional connections, as well as consistent loci of inter-individual variability. Age and sex emerged as significant determinants. These results demonstrate that independent R-fMRI datasets can be aggregated and shared. High-throughput R-fMRI can provide quantitative phenotypes for molecular genetic studies and biomarkers of developmental and pathological processes in the brain. To initiate discovery science of brain function, the 1000 Functional Connectomes Project dataset is freely accessible at www.nitrc.org/projects/fcon_1000/.
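
The functional connectivity described above is, at its core, a matrix of temporal correlations between band-limited regional signals. Below is a minimal sketch of that computation on a hypothetical `roi_timeseries` array of preprocessed BOLD data; it illustrates the concept only and is not the Fcon_1000 processing pipeline.

```python
# Minimal sketch (not the Fcon_1000 pipeline): estimate a functional connectome
# as pairwise temporal correlations of band-limited ROI signals.
import numpy as np
from scipy.signal import butter, filtfilt

def functional_connectome(roi_timeseries, tr=2.0, band=(0.01, 0.1)):
    """Return a region-by-region correlation matrix from resting-state data."""
    fs = 1.0 / tr                                        # sampling frequency (Hz)
    nyq = fs / 2.0
    b, a = butter(2, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, roi_timeseries, axis=0)    # keep the <0.1 Hz fluctuations
    return np.corrcoef(filtered.T)                       # Pearson correlations between regions

# Example with synthetic data: 200 timepoints, 10 regions (placeholder for real ROI data)
rng = np.random.default_rng(0)
roi_timeseries = rng.standard_normal((200, 10))
fc = functional_connectome(roi_timeseries)
print(fc.shape)   # (10, 10)
```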

2,787 citations


Journal ArticleDOI
TL;DR: Genetic loci associated with body mass index map near key hypothalamic regulators of energy balance, and one of these loci is near GIPR, an incretin receptor, which may provide new insights into human body weight regulation.
Abstract: Obesity is globally prevalent and highly heritable, but its underlying genetic factors remain largely elusive. To identify genetic loci for obesity susceptibility, we examined associations between body mass index and ~2.8 million SNPs in up to 123,865 individuals with targeted follow up of 42 SNPs in up to 125,931 additional individuals. We confirmed 14 known obesity susceptibility loci and identified 18 new loci associated with body mass index (P < 5 × 10⁻⁸), one of which includes a copy number variant near GPRC5B. Some loci (at MC4R, POMC, SH2B1 and BDNF) map near key hypothalamic regulators of energy balance, and one of these loci is near GIPR, an incretin receptor. Furthermore, genes in other newly associated loci may provide new insights into human body weight regulation.
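
As a rough illustration of the per-SNP association testing behind such a genome-wide scan, the sketch below regresses a phenotype on genotype dosage plus covariates for a single SNP and compares the p-value with the P < 5 × 10⁻⁸ threshold. All variable names and the simulated data are illustrative; this is a toy single-cohort analogue, not the study's analysis pipeline.

```python
# Minimal sketch of a single-SNP association test: regress the phenotype on
# genotype dosage (0/1/2) plus covariates and compare the SNP p-value with the
# genome-wide significance threshold. Names (`bmi`, `dosage`, ...) are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
dosage = rng.binomial(2, 0.3, size=n)          # additive genotype coding
age = rng.normal(50, 10, size=n)
sex = rng.integers(0, 2, size=n)
bmi = 25 + 0.15 * dosage + 0.02 * age + rng.normal(0, 3, size=n)

X = sm.add_constant(np.column_stack([dosage, age, sex]))
fit = sm.OLS(bmi, X).fit()
p_snp = fit.pvalues[1]                         # p-value for the dosage term
print(f"SNP p-value: {p_snp:.2e}, genome-wide significant: {p_snp < 5e-8}")
```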

2,632 citations


Journal ArticleDOI
TL;DR: A strategy was needed to differentiate among clinical practice guidelines, which vary widely in quality.
Abstract: Clinical practice guidelines, which are systematically developed statements aimed at helping people make clinical, policy-related and system-related decisions,[1,2] frequently vary widely in quality.[3,4] A strategy was needed to differentiate among guidelines and ensure that those …

2,616 citations


Journal ArticleDOI
TL;DR: It is proposed that information is gated by inhibiting task-irrelevant regions, thus routing information to task-relevant regions and the empirical support for this framework is discussed.
Abstract: In order to understand the working brain as a network, it is essential to identify the mechanisms by which information is gated between regions. We here propose that information is gated by inhibiting task-irrelevant regions, thus routing information to task-relevant regions. The functional inhibition is reflected in oscillatory activity in the alpha band (8-13 Hz). From a physiological perspective the alpha activity provides pulsed inhibition reducing the processing capabilities of a given area. Active processing in the engaged areas is reflected by neuronal synchronization in the gamma band (30-100 Hz) accompanied by an alpha band decrease. According to this framework the brain should be studied as a network by investigating cross-frequency interactions between gamma and alpha activity. Specifically the framework predicts that optimal task performance will correlate with alpha activity in task-irrelevant areas. In this review we will discuss the empirical support for this framework. Given that alpha activity is by far the strongest signal recorded by EEG and MEG, we propose that a major part of the electrophysiological activity detected from the working brain reflects gating by inhibition.
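
Since the framework is phrased in terms of band-limited power (alpha, 8-13 Hz; gamma, 30-100 Hz), the sketch below shows how such band power can be quantified from a single M/EEG-like channel using Welch's method on a synthetic signal. It illustrates the measurement in general, not any specific study in the review.

```python
# Minimal sketch: quantify alpha (8-13 Hz) and gamma (30-100 Hz) power in one
# M/EEG-like channel with Welch's method. Signal and sampling rate are
# synthetic placeholders, not data from the reviewed studies.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, fmin, fmax):
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))   # 2 s windows
    mask = (freqs >= fmin) & (freqs <= fmax)
    return np.trapz(psd[mask], freqs[mask])                  # integrate PSD over the band

fs = 500.0                                   # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
signal = (np.sin(2 * np.pi * 10 * t)         # alpha component at 10 Hz
          + 0.3 * np.sin(2 * np.pi * 60 * t) # gamma component at 60 Hz
          + rng.standard_normal(t.size))

print("alpha power:", band_power(signal, fs, 8, 13))
print("gamma power:", band_power(signal, fs, 30, 100))
```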

2,448 citations


Journal ArticleDOI
TL;DR: Ecosystems thought of as not N limited, such as tropical and subtropical systems, may be more vulnerable in the regeneration phase, in situations where heterogeneity in N availability is reduced by atmospheric N deposition, on sandy soils, or in montane areas.
Abstract: Atmospheric nitrogen (N) deposition is a recognized threat to plant diversity in temperate and northern parts of Europe and North America. This paper assesses evidence from field experiments for N deposition effects and thresholds for terrestrial plant diversity protection across a latitudinal range of main categories of ecosystems, from arctic and boreal systems to tropical forests. Current thinking on the mechanisms of N deposition effects on plant diversity, the global distribution of G200 ecoregions, and current and future (2030) estimates of atmospheric N-deposition rates are then used to identify the risks to plant diversity in all major ecosystem types now and in the future. This synthesis paper clearly shows that N accumulation is the main driver of changes to species composition across the whole range of different ecosystem types by driving the competitive interactions that lead to composition change and/or making conditions unfavorable for some species. Other effects such as direct toxicity of nitrogen gases and aerosols, long-term negative effects of increased ammonium and ammonia availability, soil-mediated effects of acidification, and secondary stress and disturbance are more ecosystem- and site-specific and often play a supporting role. N deposition effects in mediterranean ecosystems have now been identified, leading to a first estimate of an effect threshold. Importantly, ecosystems thought of as not N limited, such as tropical and subtropical systems, may be more vulnerable in the regeneration phase, in situations where heterogeneity in N availability is reduced by atmospheric N deposition, on sandy soils, or in montane areas. Critical loads are effect thresholds for N deposition, and the critical load concept has helped European governments make progress toward reducing N loads on sensitive ecosystems. More needs to be done in Europe and North America, especially for the more sensitive ecosystem types, including several ecosystems of high conservation importance. The results of this assessment show that the vulnerable regions outside Europe and North America which have not received enough attention are ecoregions in eastern and southern Asia (China, India), an important part of the mediterranean ecoregion (California, southern Europe), and in the coming decades several subtropical and tropical parts of Latin America and Africa. Reductions in plant diversity by increased atmospheric N deposition may be more widespread than first thought, and more targeted studies are required in low background areas, especially in the G200 ecoregions.

2,154 citations


Journal ArticleDOI
Thomas J. Hudson, Warwick Anderson, Axel Aretz, and 270 more authors (92 institutions)
15 Apr 2010
TL;DR: Systematic studies of more than 25,000 cancer genomes will reveal the repertoire of oncogenic mutations, uncover traces of the mutagenic influences, define clinically relevant subtypes for prognosis and therapeutic management, and enable the development of new cancer therapies.
Abstract: The International Cancer Genome Consortium (ICGC) was launched to coordinate large-scale cancer genome studies in tumours from 50 different cancer types and/or subtypes that are of clinical and societal importance across the globe. Systematic studies of more than 25,000 cancer genomes at the genomic, epigenomic and transcriptomic levels will reveal the repertoire of oncogenic mutations, uncover traces of the mutagenic influences, define clinically relevant subtypes for prognosis and therapeutic management, and enable the development of new cancer therapies.

2,041 citations


Journal ArticleDOI
TL;DR: The present article presents the freely available Radboud Faces Database, a face database in which displayed expressions, gaze direction, and head orientation are parametrically varied in a complete factorial design, containing both Caucasian adult and children images.
Abstract: Many research fields concerned with the processing of information contained in human faces would benefit from face stimulus sets in which specific facial characteristics are systematically varied while other important picture characteristics are kept constant. Specifically, a face database in which displayed expressions, gaze direction, and head orientation are parametrically varied in a complete factorial design would be highly useful in many research domains. Furthermore, these stimuli should be standardised in several important, technical aspects. The present article presents the freely available Radboud Faces Database offering such a stimulus set, containing both Caucasian adult and children images. This face database is described both procedurally and in terms of content, and a validation study concerning its most important characteristics is presented. In the validation study, all frontal images were rated with respect to the shown facial expression, intensity of expression, clarity of expression, genuineness of expression, attractiveness, and valence. The results show very high recognition of the intended facial expressions.

2,041 citations


Journal ArticleDOI
Hana Lango Allen, Karol Estrada, Guillaume Lettre, Sonja I. Berndt, and 341 more authors (90 institutions)
14 Oct 2010-Nature
TL;DR: It is shown that hundreds of genetic variants, in at least 180 loci, influence adult height, a highly heritable and classic polygenic trait, and indicates that GWA studies can identify large numbers of loci that implicate biologically relevant genes and pathways.
Abstract: Most common human traits and diseases have a polygenic pattern of inheritance: DNA sequence variants at many genetic loci influence the phenotype. Genome-wide association (GWA) studies have identified more than 600 variants associated with human traits, but these typically explain small fractions of phenotypic variation, raising questions about the use of further studies. Here, using 183,727 individuals, we show that hundreds of genetic variants, in at least 180 loci, influence adult height, a highly heritable and classic polygenic trait. The large number of loci reveals patterns with important implications for genetic studies of common human diseases and traits. First, the 180 loci are not random, but instead are enriched for genes that are connected in biological pathways (P = 0.016) and that underlie skeletal growth defects (P < 0.001). Second, the likely causal gene is often located near the most strongly associated variant: in 13 of 21 loci containing a known skeletal growth gene, that gene was closest to the associated variant. Third, at least 19 loci have multiple independently associated variants, suggesting that allelic heterogeneity is a frequent feature of polygenic traits, that comprehensive explorations of already-discovered loci should discover additional variants and that an appreciable fraction of associated loci may have been identified. Fourth, associated variants are enriched for likely functional effects on genes, being over-represented among variants that alter amino-acid structure of proteins and expression levels of nearby genes. Our data explain approximately 10% of the phenotypic variation in height, and we estimate that unidentified common variants of similar effect sizes would increase this figure to approximately 16% of phenotypic variation (approximately 20% of heritable variation). Although additional approaches are needed to dissect the genetic architecture of polygenic human traits fully, our findings indicate that GWA studies can identify large numbers of loci that implicate biologically relevant genes and pathways.

1,768 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that a designed strain aligned along three main crystallographic directions induces strong gauge fields that effectively act as a uniform magnetic field exceeding 10 T, similar to the case of a topological insulator.
Abstract: Owing to the fact that graphene is just one atom thick, it has been suggested that it might be possible to control its properties by subjecting it to mechanical strain. New analysis indicates not only this, but that pseudomagnetic behaviour and even zero-field quantum Hall effects could be induced in graphene under realistic amounts of strain. Among many remarkable qualities of graphene, its electronic properties attract particular interest owing to the chiral character of the charge carriers, which leads to such unusual phenomena as metallic conductivity in the limit of no carriers and the half-integer quantum Hall effect observable even at room temperature1,2,3. Because graphene is only one atom thick, it is also amenable to external influences, including mechanical deformation. The latter offers a tempting prospect of controlling graphene’s properties by strain and, recently, several reports have examined graphene under uniaxial deformation4,5,6,7,8. Although the strain can induce additional Raman features7,8, no significant changes in graphene’s band structure have been either observed or expected for realistic strains of up to ∼15% (refs 9, 10, 11). Here we show that a designed strain aligned along three main crystallographic directions induces strong gauge fields12,13,14 that effectively act as a uniform magnetic field exceeding 10 T. For a finite doping, the quantizing field results in an insulating bulk and a pair of countercirculating edge states, similar to the case of a topological insulator15,16,17,18,19,20. We suggest realistic ways of creating this quantum state and observing the pseudomagnetic quantum Hall effect. We also show that strained superlattices can be used to open significant energy gaps in graphene’s electronic spectrum.
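
The connection between strain and an effective magnetic field can be summarized, in one common convention and up to convention-dependent prefactors of order ħβ/(ea) with β ≈ 2-3 and a the lattice constant, as:

```latex
% Sketch (not the paper's exact notation): the strain tensor u_ij acts as a
% pseudo-vector potential; the other valley sees the opposite sign.
\begin{align*}
  \mathbf{A} &\propto \begin{pmatrix} u_{xx} - u_{yy} \\ -2\,u_{xy} \end{pmatrix},
  &
  B_s &= \partial_x A_y - \partial_y A_x .
\end{align*}
```

A strain pattern with trigonal symmetry, aligned with the crystallographic axes, can therefore make the pseudomagnetic field approximately uniform, which is the regime discussed above.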

Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, A. A. Abdelalim, and 2,582 more authors (23 institutions)
TL;DR: The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid, including supporting the detector description, interfacing the event generation, and combining the GEANT4 simulation of the response of the individual detectors.
Abstract: The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid. This simulation requires many components, from the generators that simulate particle collisions, through packages simulating the response of the various detectors and triggers. All of these components come together under the ATLAS simulation infrastructure. In this paper, that infrastructure is discussed, including that supporting the detector description, interfacing the event generation, and combining the GEANT4 simulation of the response of the individual detectors. Also described are the tools allowing the software validation, performance testing, and the validation of the simulated output against known physics processes.

Journal ArticleDOI
25 Mar 2010-Nature
TL;DR: Evidence is presented for a fourth oxygen-producing pathway, possibly of considerable geochemical and evolutionary importance, which opens up the possibility that oxygen was available to microbial metabolism before the evolution of oxygenic photosynthesis.
Abstract: Only three biological pathways are known to produce oxygen: photosynthesis, chlorate respiration and the detoxification of reactive oxygen species. Here we present evidence for a fourth pathway, possibly of considerable geochemical and evolutionary importance. The pathway was discovered after metagenomic sequencing of an enrichment culture that couples anaerobic oxidation of methane with the reduction of nitrite to dinitrogen. The complete genome of the dominant bacterium, named ‘Candidatus Methylomirabilis oxyfera’, was assembled. This apparently anaerobic, denitrifying bacterium encoded, transcribed and expressed the well-established aerobic pathway for methane oxidation, whereas it lacked known genes for dinitrogen production. Subsequent isotopic labelling indicated that ‘M. oxyfera’ bypassed the denitrification intermediate nitrous oxide by the conversion of two nitric oxide molecules to dinitrogen and oxygen, which was used to oxidize methane. These results extend our understanding of hydrocarbon degradation under anoxic conditions and explain the biochemical mechanism of a poorly understood freshwater methane sink. Because nitrogen oxides were already present on early Earth, our finding opens up the possibility that oxygen was available to microbial metabolism before the evolution of oxygenic photosynthesis. A previously unknown pathway producing oxygen during anaerobic methane oxidation linked to nitrite and nitrate reduction has been found in microbes isolated from freshwater sediments in Dutch drainage ditches. The complete genome of the bacterium responsible for this reaction has been assembled, and found to contain genes for aerobic methane oxidation. The bacterium reduces nitrite via the recombination of two molecules of nitric oxide into nitrogen and oxygen, bypassing the familiar denitrification intermediate nitrous oxide. This discovery is relevant to nitrogen and methane cycling in the environment and, since nitrogen oxides arose early on Earth, raises the possibility that oxygen was available to microbes before the advent of oxygen-producing photosynthesis. In certain microbes, the anaerobic oxidation of methane can be linked to the reduction of nitrates and nitrites. Here it is shown that this occurs through the intermediate production of oxygen. This brings the number of known biological pathways for oxygen production to four, with implications for our understanding of life on the early Earth.
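
The proposed 'intra-aerobic' route can be summarized schematically as follows; this is a sketch using the commonly cited overall stoichiometry for nitrite-dependent anaerobic methane oxidation, and the individual steps are not written to be electron-balanced against one another.

```latex
% Hedged sketch of the proposed pathway; steps are schematic.
\begin{align*}
  \mathrm{NO_2^- + 2\,H^+ + e^-} &\rightarrow \mathrm{NO + H_2O} \\
  \mathrm{2\,NO} &\rightarrow \mathrm{N_2 + O_2} && \text{(proposed NO dismutation)} \\
  \mathrm{CH_4 + 2\,O_2} &\rightarrow \mathrm{CO_2 + 2\,H_2O} && \text{(intra-aerobic methane oxidation)} \\
  \mathrm{3\,CH_4 + 8\,NO_2^- + 8\,H^+} &\rightarrow \mathrm{3\,CO_2 + 4\,N_2 + 10\,H_2O} && \text{(overall)}
\end{align*}
```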

Journal ArticleDOI
TL;DR: In this article, the authors review the progress in this field of laser manipulation of magnetic order in a systematic way and show that the polarization of light plays an essential role in the manipulation of the magnetic moments at the femtosecond time scale.
Abstract: The interaction of subpicosecond laser pulses with magnetically ordered materials has developed into a fascinating research topic in modern magnetism. From the discovery of subpicosecond demagnetization over a decade ago to the recent demonstration of magnetization reversal by a single 40 fs laser pulse, the manipulation of magnetic order by ultrashort laser pulses has become a fundamentally challenging topic with a potentially high impact for future spintronics, data storage and manipulation, and quantum computation. Understanding the underlying mechanisms implies understanding the interaction of photons with charges, spins, and lattice, and the angular momentum transfer between them. This paper will review the progress in this field of laser manipulation of magnetic order in a systematic way. Starting with a historical introduction, the interaction of light with magnetically ordered matter is discussed. By investigating metals, semiconductors, and dielectrics, the roles of nearly free electrons, charge redistributions, and spin-orbit and spin-lattice interactions can partly be separated, and effects due to heating can be distinguished from those that are not. It will be shown that there is a fundamental distinction between processes that involve the actual absorption of photons and those that do not. It turns out that for the latter, the polarization of light plays an essential role in the manipulation of the magnetic moments at the femtosecond time scale. Thus, circularly and linearly polarized pulses are shown to act as strong transient magnetic field pulses originating from the nonabsorptive inverse Faraday and inverse Cotton-Mouton effects, respectively. The recent progress in the understanding of magneto-optical effects on the femtosecond time scale together with the mentioned inverse, optomagnetic effects promises a bright future for this field of ultrafast optical manipulation of magnetic order or femtomagnetism.
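
The nonabsorptive, polarization-dependent mechanism mentioned above is often summarized by the schematic form of the inverse Faraday effect, in which circularly polarized light acts as an effective magnetic field; the inverse Cotton-Mouton analogue involves linearly polarized light together with the existing magnetization. Material-dependent prefactors (the magneto-optical susceptibility) are omitted in this sketch.

```latex
% Schematic form of the inverse Faraday effect (sketch; prefactors omitted).
% The cross product vanishes for linear polarization and is maximal for
% circular polarization.
\begin{equation*}
  \mathbf{H}_{\mathrm{eff}} \propto \mathbf{E}(\omega) \times \mathbf{E}^{*}(\omega)
\end{equation*}
```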

Journal ArticleDOI
TL;DR: In this article, the authors present a set of recommendations for the treatment of rheumatoid arthritis with synthetic and biological disease-modifying antirheumatic drugs (DMARDs) and glucocorticoids (GCs) that also account for strategic algorithms and deal with economic aspects.
Abstract: Treatment of rheumatoid arthritis (RA) may differ among rheumatologists and currently, clear and consensual international recommendations on RA treatment are not available. In this paper recommendations for the treatment of RA with synthetic and biological disease-modifying antirheumatic drugs (DMARDs) and glucocorticoids (GCs) that also account for strategic algorithms and deal with economic aspects, are described. The recommendations are based on evidence from five systematic literature reviews (SLRs) performed for synthetic DMARDs, biological DMARDs, GCs, treatment strategies and economic issues. The SLR-derived evidence was discussed and summarised as an expert opinion in the course of a Delphi-like process. Levels of evidence, strength of recommendations and levels of agreement were derived. Fifteen recommendations were developed covering an area from general aspects such as remission/low disease activity as treatment aim via the preference for methotrexate monotherapy with or without GCs vis-a-vis combination of synthetic DMARDs to the use of biological agents mainly in patients for whom synthetic DMARDs and tumour necrosis factor inhibitors had failed. Cost effectiveness of the treatments was additionally examined. These recommendations are intended to inform rheumatologists, patients and other stakeholders about a European consensus on the management of RA with DMARDs and GCs as well as strategies to reach optimal outcomes of RA, based on evidence and expert opinion.

Journal ArticleDOI
TL;DR: Severe hypoglycemia was strongly associated with increased risks of a range of adverse clinical outcomes, including respiratory, digestive, and skin conditions, and no relationship was found between repeated episodes of severe hypoglycemia and vascular outcomes or death.
Abstract: Background Severe hypoglycemia may increase the risk of a poor outcome in patients with type 2 diabetes assigned to an intensive glucose-lowering intervention. We analyzed data from a large study of intensive glucose lowering to explore the relationship between severe hypoglycemia and adverse clinical outcomes. Methods We examined the associations between severe hypoglycemia and the risks of macrovascular or microvascular events and death among 11,140 patients with type 2 diabetes, using Cox proportional-hazards models with adjustment for covariates measured at baseline and after randomization. Results During a median follow-up period of 5 years, 231 patients (2.1%) had at least one severe hypoglycemic episode; 150 had been assigned to intensive glucose control (2.7% of the 5571 patients in that group), and 81 had been assigned to standard glucose control (1.5% of the 5569 patients in that group). The median times from the onset of severe hypoglycemia to the first major macrovascular event, the first major microvascular event, and death were 1.56 years (interquartile range, 0.84 to 2.41), 0.99 years (interquartile range, 0.40 to 2.17), and 1.05 years (interquartile range, 0.34 to 2.41), respectively. During follow-up, severe hypoglycemia was associated with a significant increase in the adjusted risks of major macrovascular events (hazard ratio, 2.88; 95% confidence interval [CI], 2.01 to 4.12), major microvascular events (hazard ratio, 1.81; 95% CI, 1.19 to 2.74), death from a cardiovascular cause (hazard ratio, 2.68; 95% CI, 1.72 to 4.19), and death from any cause (hazard ratio, 2.69; 95% CI, 1.97 to 3.67) (P<0.001 for all comparisons). Similar associations were apparent for a range of nonvascular outcomes, including respiratory, digestive, and skin conditions (P<0.01 for all comparisons). No relationship was found between repeated episodes of severe hypoglycemia and vascular outcomes or death. Conclusions Severe hypoglycemia was strongly associated with increased risks of a range of adverse clinical outcomes. It is possible that severe hypoglycemia contributes to adverse outcomes, but these analyses indicate that hypoglycemia is just as likely to be a marker of vulnerability to such events. (Funded by Servier and the National Health and Medical Research Council of Australia; ClinicalTrials.gov number, NCT00145925.)
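
A minimal sketch of the kind of covariate-adjusted Cox proportional-hazards analysis described in the Methods is given below, using the lifelines package on synthetic data; the column names are placeholders, not the trial's variables.

```python
# Minimal sketch of a covariate-adjusted Cox proportional-hazards model of the
# kind described in the Methods. Synthetic data and illustrative column names,
# not the trial dataset.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 1000
df = pd.DataFrame({
    "severe_hypo": rng.binomial(1, 0.02, n),     # exposure of interest
    "age": rng.normal(66, 6, n),                 # baseline covariate
    "intensive_arm": rng.binomial(1, 0.5, n),    # randomized treatment assignment
    "time_years": rng.exponential(5, n),         # follow-up time
    "event": rng.binomial(1, 0.1, n),            # event indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="event")
cph.print_summary()                              # hazard ratios with 95% CIs
```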

Journal ArticleDOI
TL;DR: Using a multiparameter tuning model, this work describes how dimension, density, stiffness, and orientation of the extracellular matrix together with cell determinants—including cell–cell and cell–matrix adhesion, cytoskeletal polarity and stiffness, etc.—interdependently control migration mode and efficiency.
Abstract: Cell migration underlies tissue formation, maintenance, and regeneration as well as pathological conditions such as cancer invasion. Structural and molecular determinants of both tissue environment and cell behavior define whether cells migrate individually (through amoeboid or mesenchymal modes) or collectively. Using a multiparameter tuning model, we describe how dimension, density, stiffness, and orientation of the extracellular matrix together with cell determinants-including cell-cell and cell-matrix adhesion, cytoskeletal polarity and stiffness, and pericellular proteolysis-interdependently control migration mode and efficiency. Motile cells integrate variable inputs to adjust interactions among themselves and with the matrix to dictate the migration mode. The tuning model provides a matrix of parameters that control cell movement as an adaptive and interconvertible process with relevance to different physiological and pathological contexts.

Journal ArticleDOI
TL;DR: A minimally invasive step-up approach, as compared with open necrosectomy, reduced the rate of the composite end point of major complications or death among patients with necrotizing pancreatitis and infected necrotic tissue.
Abstract: Background Necrotizing pancreatitis with infected necrotic tissue is associated with a high rate of complications and death. Standard treatment is open necrosectomy. The outcome may be improved by a minimally invasive step-up approach. Methods In this multicenter study, we randomly assigned 88 patients with necrotizing pancreatitis and suspected or confirmed infected necrotic tissue to undergo primary open necrosectomy or a step-up approach to treatment. The step-up approach consisted of percutaneous drainage followed, if necessary, by minimally invasive retroperitoneal necrosectomy. The primary end point was a composite of major complications (new-onset multiple-organ failure or multiple systemic complications, perforation of a visceral organ or enterocutaneous fistula, or bleeding) or death. Results The primary end point occurred in 31 of 45 patients (69%) assigned to open necrosectomy and in 17 of 43 patients (40%) assigned to the step-up approach (risk ratio with the step-up approach, 0.57; 95% confidence interval, 0.38 to 0.87; P = 0.006). Of the patients assigned to the step-up approach, 35% were treated with percutaneous drainage only. New-onset multiple-organ failure occurred less often in patients assigned to the step-up approach than in those assigned to open necrosectomy (12% vs. 40%, P = 0.002). The rate of death did not differ significantly between groups (19% vs. 16%, P = 0.70). Patients assigned to the step-up approach had a lower rate of incisional hernias (7% vs. 24%, P = 0.03) and new-onset diabetes (16% vs. 38%, P = 0.02). Conclusions A minimally invasive step-up approach, as compared with open necrosectomy, reduced the rate of the composite end point of major complications or death among patients with necrotizing pancreatitis and infected necrotic tissue. (Current Controlled Trials number, ISRCTN13975868.)
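
The headline risk ratio can be reproduced from the event counts quoted in the abstract; here is a small sketch using the standard log risk-ratio confidence interval.

```python
# Reproduce the reported risk ratio and 95% CI from the abstract's event counts
# (17/43 with the step-up approach vs. 31/45 with open necrosectomy).
import math

a, n1 = 17, 43        # step-up approach: events / patients
b, n2 = 31, 45        # open necrosectomy: events / patients

rr = (a / n1) / (b / n2)
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")   # ~0.57 (0.38 to 0.87)
```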

Journal ArticleDOI
20 Dec 2010-Small
TL;DR: Fluorographene is a high-quality insulator that inherits the mechanical strength of graphene, exhibiting a Young's modulus of 100 N m⁻¹ and sustaining strains of 15%.
Abstract: A stoichiometric derivative of graphene with a fluorine atom attached to each carbon is reported. Raman, optical, structural, micromechanical, and transport studies show that the material is qualitatively different from the known graphene-based nonstoichiometric derivatives. Fluorographene is a high-quality insulator (resistivity >10¹² Ω) with an optical gap of 3 eV. It inherits the mechanical strength of graphene, exhibiting a Young's modulus of 100 N m⁻¹ and sustaining strains of 15%. Fluorographene is inert and stable up to 400 °C even in air, similar to Teflon.

Journal ArticleDOI
Jörg Ederle, Joanna Dobson, Roland L Featherstone, and 348 more authors (40 institutions)
TL;DR: Completion of long-term follow-up is needed to establish the efficacy of carotid artery stenting compared with endarterectomy; in the meantime, carotid endarterectomy should remain the treatment of choice for patients suitable for surgery.

Journal ArticleDOI
TL;DR: Disease-related causes, in particular pulmonary fibrosis, PAH and cardiac causes, accounted for the majority of deaths in SSc.
Abstract: Objectives To determine the causes and predictors of mortality in systemic sclerosis (SSc). Methods Patients with SSc (n=5860) fulfilling the American College of Rheumatology criteria and prospectively followed in the EULAR Scleroderma Trials and Research (EUSTAR) cohort were analysed. EUSTAR centres completed a structured questionnaire on cause of death and comorbidities. Kaplan-Meier and Cox proportional hazards models were used to analyse survival in SSc subgroups and to identify predictors of mortality. Results Questionnaires were obtained on 234 of 284 fatalities. 55% of deaths were attributed directly to SSc and 41% to non-SSc causes; in 4% the cause of death was not assigned. Of the SSc-related deaths, 35% were attributed to pulmonary fibrosis, 26% to pulmonary arterial hypertension (PAH) and 26% to cardiac causes (mainly heart failure and arrhythmias). Among the non-SSc-related causes, infections (33%) and malignancies (31%) were followed by cardiovascular causes (29%). Of the non-SSc-related fatalities, 25% died of causes in which SSc-related complications may have participated (pneumonia, sepsis and gastrointestinal haemorrhage). Independent risk factors for mortality and their HR were: proteinuria (HR 3.34), the presence of PAH based on echocardiography (HR 2.02), pulmonary restriction (forced vital capacity below 80% of normal, HR 1.64), dyspnoea above New York Heart Association class II (HR 1.61), diffusing capacity of the lung (HR 1.20 per 10% decrease), patient age at onset of Raynaud's phenomenon (HR 1.30 per 10 years) and the modified Rodnan skin score (HR 1.20 per 10 score points). Conclusion Disease-related causes, in particular pulmonary fibrosis, PAH and cardiac causes, accounted for the majority of deaths in SSc.

Book ChapterDOI
01 Jan 2010
TL;DR: In this paper, the identification and quantification of moderating effects in complex causal structures by means of Partial Least Squares Path Modeling is discussed; group comparisons are shown to be a special case of moderating effects in which the grouping variable acts as a categorical moderator.
Abstract: Along with the development of scientific disciplines, notably the social sciences, hypothesized relationships become increasingly complex. Besides the examination of direct effects, researchers are more and more interested in moderating effects. Moderating effects are evoked by variables whose variation influences the strength or the direction of a relationship between an exogenous and an endogenous variable. Investigators using partial least squares path modeling need appropriate means to test their models for such moderating effects. We illustrate the identification and quantification of moderating effects in complex causal structures by means of Partial Least Squares Path Modeling. We also show that group comparisons, i.e. comparisons of model estimates for different groups of observations, represent a special case of moderating effects by having the grouping variable as a categorical moderator variable. We provide profound answers to typical questions related to testing moderating effects within PLS path models: 1. How can a moderating effect be drawn in a PLS path model, taking into account that the available software only permits direct effects? 2. How does the type of measurement model of the independent and the moderator variables influence the detection of moderating effects? 3. Before the model estimation, should the data be prepared in a particular manner? Should the indicators be centered (by having a mean of zero), standardized (by having a mean of zero and a standard deviation of one), or manipulated in any other way? 4. How can the coefficients of moderating effects be estimated and interpreted? And, finally: 5. How can the significance of moderating effects be determined? Borrowing from the body of knowledge on modeling interaction effects within multiple regression, we develop a guideline on how to test moderating effects in PLS path models. In particular, we create a graphical representation of the necessary steps to take and decisions to make in the form of a flow chart. Starting with the analysis of the type of data available, via the measurement model specification, the flow chart leads the researcher through the decisions on how to prepare the data and how to model the moderating effect. The flow chart ends with bootstrapping, the preferred means to test significance, and the final interpretation of the model outcomes.
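
The core product-term idea behind a moderating effect can be illustrated outside any PLS software with a plain-regression analogue: standardize the exogenous and moderator scores, add their product, and bootstrap the interaction coefficient. The sketch below is such an analogue on synthetic data, not an implementation of the chapter's PLS procedure.

```python
# Minimal sketch of the product-term idea behind moderating effects, with a
# bootstrap significance check. Plain regression on synthetic data, not a PLS
# path-model estimation.
import numpy as np

rng = np.random.default_rng(4)
n = 300
x = rng.standard_normal(n)                  # exogenous variable (standardized)
m = rng.standard_normal(n)                  # moderator (standardized)
y = 0.4 * x + 0.3 * m + 0.25 * x * m + rng.standard_normal(n)

def interaction_coef(x, m, y):
    X = np.column_stack([np.ones_like(x), x, m, x * m])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[3]                          # coefficient of the product term

boot = np.array([
    interaction_coef(x[idx], m[idx], y[idx])
    for idx in (rng.integers(0, n, n) for _ in range(2000))
])
est = interaction_coef(x, m, y)
ci = np.percentile(boot, [2.5, 97.5])
print(f"interaction = {est:.2f}, bootstrap 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```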

Journal ArticleDOI
TL;DR: Examples are presented to show how compartmentalization, monodispersity, single-molecule sensitivity, and high throughput have been exploited in experiments that would have been extremely difficult outside the microfluidics platform.
Abstract: Microdroplets in microfluidics offer a great number of opportunities in chemical and biological research. They provide a compartment in which species or reactions can be isolated, they are monodisperse and therefore suitable for quantitative studies, they offer the possibility to work with extremely small volumes, single cells, or single molecules, and are suitable for high-throughput experiments. The aim of this Review is to show the importance of these features in enabling new experiments in biology and chemistry. The recent advances in device fabrication are highlighted as are the remaining technological challenges. Examples are presented to show how compartmentalization, monodispersity, single-molecule sensitivity, and high throughput have been exploited in experiments that would have been extremely difficult outside the microfluidics platform.

Journal ArticleDOI
TL;DR: In this paper, an alternative and novel theoretical approach to the conceptualization and analysis of payments for environmental services (PES) is presented, taking into account complexities related to uncertainty, distributional issues, social embeddedness, and power relations.

Journal ArticleDOI
07 May 2010-Science
TL;DR: This work explores process innovations that can speed up the anammox process and use all organic matter as much as possible for energy generation.
Abstract: Organic matter must be removed from sewage to protect the quality of the water bodies that it is discharged to. Most current sewage treatment plants are aimed at removing organic matter only. They are energy-inefficient, whereas potentially the organic matter could be regarded as a source of energy. However, organic carbon is not the only pollutant in sewage: Fixed nitrogen such as ammonium (NH4+) and nitrate (NO3−) must be removed to avoid toxic algal blooms in the environment. Conventional wastewater treatment systems for nitrogen removal require a lot of energy to create aerobic conditions for bacterial nitrification, and also use organic carbon to help remove nitrate by bacterial denitrification (see the figure). An alternative approach is the use of anoxic ammonium-oxidizing (anammox) bacteria, which require less energy (1) but grow relatively slowly. We explore process innovations that can speed up the anammox process and use all organic matter as much as possible for energy generation.
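
The energy argument rests on the catabolic stoichiometry: the anammox route needs oxygen only to convert roughly half of the ammonium to nitrite and consumes no organic carbon, in contrast to full nitrification followed by heterotrophic denitrification. A simplified sketch of the reactions (biomass synthesis ignored):

```latex
% Simplified catabolic stoichiometry of the partial nitritation-anammox route.
\begin{align*}
  \text{partial nitritation:} \quad & \mathrm{NH_4^+ + \tfrac{3}{2}\,O_2 \rightarrow NO_2^- + H_2O + 2\,H^+} \\
  \text{anammox:} \quad & \mathrm{NH_4^+ + NO_2^- \rightarrow N_2 + 2\,H_2O}
\end{align*}
```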

Journal ArticleDOI
Amy Strange, Francesca Capon, Chris C. A. Spencer, Jo Knight, Michael E. Weale, Michael H. Allen, Anne Barton, Gavin Band, Céline Bellenguez, Judith G.M. Bergboer, Jenefer M. Blackwell, Elvira Bramon, Suzannah Bumpstead, Juan P. Casas, Michael J. Cork, Aiden Corvin, Panos Deloukas, Alexander T. Dilthey, Audrey Duncanson, Sarah Edkins, Xavier Estivill, Oliver FitzGerald, Colin Freeman, Emiliano Giardina, Emma Gray, Angelika Hofer, Ulrike Hüffmeier, Sarah E. Hunt, Alan D. Irvine, Janusz Jankowski, Brian Kirby, Cordelia Langford, Jesús Lascorz, Joyce Leman, Stephen Leslie, Lotus Mallbris, Hugh S. Markus, Christopher G. Mathew, W.H. Irwin McLean, Ross McManus, Rotraut Mössner, Loukas Moutsianas, Åsa Torinsson Naluai, Frank O. Nestle, Giuseppe Novelli, Alexandros Onoufriadis, Colin N. A. Palmer, Carlo Perricone, Matti Pirinen, Robert Plomin, Simon C. Potter, Ramon M. Pujol, Anna Rautanen, Eva Riveira-Muñoz, Anthony W. Ryan, Wolfgang Salmhofer, Lena Samuelsson, Stephen Sawcer, Joost Schalkwijk, Catherine H. Smith, Mona Ståhle, Zhan Su, Rachid Tazi-Ahnini, Heiko Traupe, Ananth C. Viswanathan, Richard B. Warren, Wolfgang Weger, Katarina Wolk, Nicholas W. Wood, Jane Worthington, Helen S. Young, Patrick L.J.M. Zeeuwen, Adrian Hayday, A. David Burden, Christopher E.M. Griffiths, Juha Kere, André Reis, Gilean McVean, David M. Evans, Matthew A. Brown, Jonathan Barker, Leena Peltonen, Peter Donnelly, Richard C. Trembath
TL;DR: These findings implicate pathways that integrate epidermal barrier dysfunction with innate and adaptive immune dysregulation in psoriasis pathogenesis and report compelling evidence for an interaction between the HLA-C and ERAP1 loci.
Abstract: To identify new susceptibility loci for psoriasis, we undertook a genome-wide association study of 594,224 SNPs in 2,622 individuals with psoriasis and 5,667 controls. We identified associations at eight previously unreported genomic loci. Seven loci harbored genes with recognized immune functions (IL28RA, REL, IFIH1, ERAP1, TRAF3IP2, NFKBIA and TYK2). These associations were replicated in 9,079 European samples (six loci with a combined P < 5 × 10⁻⁸ and two loci with a combined P < 5 × 10⁻⁷). We also report compelling evidence for an interaction between the HLA-C and ERAP1 loci (combined P = 6.95 × 10⁻⁶). ERAP1 plays an important role in MHC class I peptide processing. ERAP1 variants only influenced psoriasis susceptibility in individuals carrying the HLA-C risk allele. Our findings implicate pathways that integrate epidermal barrier dysfunction with innate and adaptive immune dysregulation in psoriasis pathogenesis.

Journal ArticleDOI
TL;DR: This review shows that the psychometric properties of the SDQ are strong, particularly for the teacher version, which implies that the use of theSDQ as a screening instrument should be continued and longitudinal research studies should investigate predictive validity.
Abstract: Since its development, the Strengths and Difficulties Questionnaire (SDQ) has been widely used in both research and practice. The SDQ screens for positive and negative psychological attributes. This review aims to provide an overview of the psychometric properties of the SDQ for 4- to 12-year-olds. Results from 48 studies (N = 131,223) on reliability and validity of the parent and teacher SDQ are summarized quantitatively and descriptively. Internal consistency, test-retest reliability, and inter-rater agreement are satisfactory for the parent and teacher versions. At subscale level, the reliability of the teacher version seemed stronger compared to that of the parent version. Concerning validity, 15 out of 18 studies confirmed the five-factor structure. Correlations with other measures of psychopathology as well as the screening ability of the SDQ are sufficient. This review shows that the psychometric properties of the SDQ are strong, particularly for the teacher version. For practice, this implies that the use of the SDQ as a screening instrument should be continued. Longitudinal research studies should investigate predictive validity. For both practice and research, we emphasize the use of a multi-informant approach.
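
Internal consistency in such reviews is commonly summarized with Cronbach's alpha; the sketch below computes it for a hypothetical (respondents × items) score matrix of one subscale, using synthetic data rather than actual SDQ responses.

```python
# Minimal sketch: Cronbach's alpha, the usual internal-consistency statistic,
# computed on a (respondents x items) score matrix. Synthetic data only.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = items of one subscale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(5)
latent = rng.standard_normal((500, 1))
items = latent + 0.8 * rng.standard_normal((500, 5))   # 5 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")
```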

Journal ArticleDOI
TL;DR: In this article, the authors compared four PLS-based approaches: a product indicator approach, a 2-stage approach (Chin et al., 2003), a hybrid approach (Wold, 1982), and an orthogonalizing approach (Little, Bovaird, & Widaman, 2006).
Abstract: In social and business sciences, the importance of analyzing interaction effects between manifest as well as latent variables is steadily increasing. Researchers using partial least squares (PLS) to analyze interaction effects between latent variables need an overview of the available approaches as well as their suitability. This article presents 4 PLS-based approaches: a product indicator approach (Chin, Marcolin, & Newsted, 2003), a 2-stage approach (Chin et al., 2003; Henseler & Fassott, in press), a hybrid approach (Wold, 1982), and an orthogonalizing approach (Little, Bovaird, & Widaman, 2006), and contrasts them using data related to a technology acceptance model. By means of a more extensive Monte Carlo experiment, the different approaches are compared in terms of their point estimate accuracy, their statistical power, and their prediction accuracy. Based on the results of the experiment, the use of the orthogonalizing approach is recommendable under most circumstances. Only if the orthogonalizing approach does not find a significant interaction effect should the 2-stage approach additionally be used to test significance, because it has higher statistical power. For prediction accuracy, the orthogonalizing and the product indicator approach provide a significantly and substantially more accurate prediction than the other two approaches. Among these two, the orthogonalizing approach should be used in the case of small sample sizes and few indicators per construct. If the sample size or the number of indicators per construct is medium to large, the product indicator approach should be used.
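
The orthogonalizing (residual-centering) idea can be sketched in a few lines: build all product indicators, regress each on the first-order indicators, and keep the residuals as indicators of the interaction term, so they are uncorrelated with the main effects by construction. The code below is a plain numpy illustration of that step, not a full PLS estimation.

```python
# Minimal sketch of the orthogonalizing (residual-centering) approach: product
# indicators are regressed on the first-order indicators and the residuals are
# kept as interaction indicators. Plain numpy, not a full PLS path model.
import numpy as np

def orthogonalized_product_indicators(X, M):
    """X, M: (n x p) and (n x q) indicator matrices of the two latent variables."""
    n = X.shape[0]
    products = np.column_stack([X[:, i] * M[:, j]
                                for i in range(X.shape[1])
                                for j in range(M.shape[1])])
    design = np.column_stack([np.ones(n), X, M])     # first-order indicators
    coefs, *_ = np.linalg.lstsq(design, products, rcond=None)
    return products - design @ coefs                 # residuals, orthogonal to X and M

rng = np.random.default_rng(6)
X = rng.standard_normal((200, 3))
M = rng.standard_normal((200, 2))
inter = orthogonalized_product_indicators(X, M)
print(inter.shape)                                   # (200, 6) product indicators
check = np.column_stack([X, M]).T @ inter
print(np.allclose(check, 0))                         # residuals are orthogonal to X and M
```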

Journal ArticleDOI
19 Mar 2010-Cell
TL;DR: The new era of anti-inflammatory agents includes "biologicals" such as anticytokine therapies and small molecules that block the activity of kinases and small RNAs.

Journal ArticleDOI
TL;DR: The physics of graphene acts as a bridge between quantum field theory and condensed matter physics, owing to the special quality of the graphene quasiparticles behaving as massless two-dimensional Dirac fermions.
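
The "massless two-dimensional Dirac fermions" statement corresponds to the standard low-energy description of graphene's charge carriers (one valley shown; v_F ≈ 10⁶ m/s), sketched below.

```latex
% Standard low-energy (single-valley) description of graphene's carriers:
% a massless Dirac Hamiltonian with a linear dispersion.
\begin{equation*}
  H = \hbar v_F\, \boldsymbol{\sigma} \cdot \mathbf{k},
  \qquad
  E_{\pm}(\mathbf{k}) = \pm \hbar v_F\, |\mathbf{k}| .
\end{equation*}
```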