
Showing papers by "University of Milan" published in 2012


Journal ArticleDOI
Georges Aad1, T. Abajyan2, Brad Abbott3, Jalal Abdallah4 +2964 more · Institutions (200)
TL;DR: In this article, a search for the Standard Model Higgs boson in proton-proton collisions with the ATLAS detector at the LHC is presented; an excess of events is observed with a significance of 5.9 standard deviations, corresponding to a background fluctuation probability of 1.7×10−9.
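
For readers less used to the statistics shorthand, the significance Z and the background-fluctuation probability p quoted above are two expressions of the same quantity, related by the one-sided Gaussian tail (a standard conversion, not a formula taken from the paper):

\[
p \;=\; 1 - \Phi(Z) \;=\; \int_{Z}^{\infty} \frac{e^{-t^{2}/2}}{\sqrt{2\pi}}\,dt,
\qquad
p\big|_{Z=5.9} \;\approx\; 1.8\times10^{-9},
\]

which matches the quoted 1.7×10−9 once the significance is carried at full precision rather than rounded to 5.9.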

9,282 citations


Journal ArticleDOI
TL;DR: In this paper, results from searches for the standard model Higgs boson in proton-proton collisions at 7 and 8 TeV in the CMS experiment at the LHC are reported, using data samples corresponding to integrated luminosities of up to 5.1 fb−1 at 7 TeV and 5.3 fb−1 at 8 TeV; an excess of events is observed at a mass near 125 GeV with a local significance of 5.0 standard deviations, where the expected significance for a standard model Higgs boson of that mass is 5.8 standard deviations.

8,857 citations


Journal ArticleDOI
TL;DR: These guidelines are presented for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. A key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process vs. those that measure flux through the autophagy pathway (i.e., the complete process); thus, a block in macroautophagy that results in autophagosome accumulation needs to be differentiated from stimuli that result in increased autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.

4,316 citations


Journal ArticleDOI
TL;DR: In this article, the authors deal with the fractional Sobolev spaces W s;p and analyze the relations among some of their possible denitions and their role in the trace theory.
Abstract: This paper deals with the fractional Sobolev spaces W s;p . We analyze the relations among some of their possible denitions and their role in the trace theory. We prove continuous and compact embeddings, investigating the problem of the extension domains and other regularity results. Most of the results we present here are probably well known to the experts, but we believe that our proofs are original and we do not make use of any interpolation techniques nor pass through the theory of Besov spaces. We also present some counterexamples in non-Lipschitz domains.
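
For orientation, the spaces in question are the classical ones defined through the Gagliardo seminorm; the following is the textbook definition for an open set Ω ⊆ R^n, s ∈ (0,1) and p ∈ [1,∞), stated here for convenience rather than copied from the paper:

\[
[u]_{W^{s,p}(\Omega)}
= \left( \int_{\Omega}\!\int_{\Omega} \frac{|u(x)-u(y)|^{p}}{|x-y|^{n+sp}}\,dx\,dy \right)^{1/p},
\qquad
W^{s,p}(\Omega) = \bigl\{\, u \in L^{p}(\Omega) : [u]_{W^{s,p}(\Omega)} < \infty \,\bigr\},
\]

equipped with the norm \|u\|_{W^{s,p}(\Omega)} = \|u\|_{L^{p}(\Omega)} + [u]_{W^{s,p}(\Omega)}.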

3,555 citations


Book
12 Dec 2012
TL;DR: In this article, the authors focus on regret analysis in the context of multi-armed bandit problems, where regret is the gap between the cumulative payoff actually obtained and that of the single best action in hindsight, and the central tension is the exploration-exploitation trade-off between staying with the option that gave the highest payoffs in the past and exploring new options that might give higher payoffs in the future.
Abstract: A multi-armed bandit problem - or, simply, a bandit problem - is a sequential allocation problem defined by a set of actions. At each time step, a unit resource is allocated to an action and some observable payoff is obtained. The goal is to maximize the total payoff obtained in a sequence of allocations. The name bandit refers to the colloquial term for a slot machine (a "one-armed bandit" in American slang). In a casino, a sequential allocation problem is obtained when the player is facing many slot machines at once (a "multi-armed bandit"), and must repeatedly choose where to insert the next coin. Multi-armed bandit problems are the most basic examples of sequential decision problems with an exploration-exploitation trade-off. This is the balance between staying with the option that gave highest payoffs in the past and exploring new options that might give higher payoffs in the future. Although the study of bandit problems dates back to the 1930s, exploration-exploitation trade-offs arise in several modern applications, such as ad placement, website optimization, and packet routing. Mathematically, a multi-armed bandit is defined by the payoff process associated with each option. In this book, the focus is on two extreme cases in which the analysis of regret is particularly simple and elegant: independent and identically distributed payoffs and adversarial payoffs. Besides the basic setting of finitely many actions, it also analyzes some of the most important variants and extensions, such as the contextual bandit model. This monograph is an ideal reference for students and researchers with an interest in bandit problems.
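
To make the exploration-exploitation trade-off concrete, below is a minimal Python sketch of the UCB1 strategy for the stochastic (i.i.d.) setting, one of the standard algorithms analyzed in this literature; the Bernoulli-reward simulation, function name and parameters are illustrative assumptions, not code from the monograph.

import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Run UCB1 on a simulated Bernoulli bandit.
    arm_means : true success probabilities, unknown to the player and used
                here only to simulate payoffs.
    horizon   : total number of pulls.
    Returns (total payoff, pseudo-regret versus the best arm)."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k      # pulls per arm
    sums = [0.0] * k      # cumulative payoff per arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1   # initialization: pull every arm once
        else:
            # empirical mean plus an exploration bonus that shrinks with counts
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        payoff = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += payoff
        total += payoff
    return total, horizon * max(arm_means) - total

print(ucb1([0.2, 0.5, 0.7], horizon=10_000))

The exploration bonus keeps apparently inferior arms being sampled just often enough that, in the i.i.d. setting, the expected regret grows only logarithmically with the horizon.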

2,427 citations


Journal ArticleDOI
TL;DR: It is shown that CEM possesses a wide range of statistical properties not available in most other matching methods but is at the same time exceptionally easy to comprehend and use.
Abstract: We discuss a method for improving causal inferences called ‘‘Coarsened Exact Matching’’ (CEM), and the new ‘‘Monotonic Imbalance Bounding’’ (MIB) class of matching methods from which CEM is derived. We summarize what is known about CEM and MIB, derive and illustrate several new desirable statistical properties of CEM, and then propose a variety of useful extensions. We show that CEM possesses a wide range of statistical properties not available in most other matching methods but is at the same time exceptionally easy to comprehend and use. We focus on the connection between theoretical properties and practical applications. We also make available easy-to-use open source software for R, Stata, and SPSS that implement all our suggestions.
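
As a rough illustration of the "coarsen, then match exactly" idea, here is a minimal Python/pandas sketch (not the authors' R, Stata or SPSS software); the column names, bin edges and weighting details are illustrative assumptions.

import pandas as pd

def coarsened_exact_matching(df, treatment_col, coarsenings):
    """Tiny sketch of Coarsened Exact Matching (CEM).
    df            : one row per unit.
    treatment_col : name of a 0/1 treatment indicator column.
    coarsenings   : dict mapping covariate name -> bin edges for coarsening.
    Returns the matched rows with CEM-style weights in 'cem_weight'."""
    work = df.copy()
    strata = []
    for col, bins in coarsenings.items():
        work[col + "_coarse"] = pd.cut(work[col], bins=bins)   # coarsen the covariate
        strata.append(col + "_coarse")

    # keep only strata that contain both treated and control units
    matched = work.groupby(strata, observed=True).filter(
        lambda g: g[treatment_col].nunique() == 2).copy()

    # treated units get weight 1; controls are reweighted so each stratum's
    # controls carry the same relative share as its treated units
    n_t = (matched[treatment_col] == 1).sum()
    n_c = (matched[treatment_col] == 0).sum()
    grp = matched.groupby(strata, observed=True)[treatment_col]
    m_t = grp.transform(lambda s: (s == 1).sum())
    m_c = grp.transform(lambda s: (s == 0).sum())
    matched["cem_weight"] = 1.0
    ctrl = matched[treatment_col] == 0
    matched.loc[ctrl, "cem_weight"] = ((m_t / m_c) * (n_c / n_t))[ctrl]
    return matched

After matching, treatment effects are typically estimated on the matched data with these weights, for example via a weighted difference in means or a weighted regression.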

2,425 citations


Journal ArticleDOI
TL;DR: In this article, the authors conducted a meta-analysis of genetic variants on the Metabochip, including 34,840 cases and 114,981 controls, overwhelmingly of European descent, and identified ten previously unreported T2D susceptibility loci, including two showing sex-differentiated association.
Abstract: To extend understanding of the genetic architecture and molecular basis of type 2 diabetes (T2D), we conducted a meta-analysis of genetic variants on the Metabochip, including 34,840 cases and 114,981 controls, overwhelmingly of European descent. We identified ten previously unreported T2D susceptibility loci, including two showing sex-differentiated association. Genome-wide analyses of these data are consistent with a long tail of additional common variant loci explaining much of the variation in susceptibility to T2D. Exploration of the enlarged set of susceptibility loci implicates several processes, including CREBBP-related transcription, adipocytokine signaling and cell cycle regulation, in diabetes pathogenesis.

1,899 citations


Journal ArticleDOI
TL;DR: It is demonstrated that black TiO2 nanoparticles obtained through a one-step reduction/crystallization process exhibit a bandgap of only 1.85 eV, which matches well with visible light absorption.
Abstract: The increasing need for new materials capable of solar fuel generation is central in the development of a green energy economy. In this contribution, we demonstrate that black TiO2 nanoparticles obtained through a one-step reduction/crystallization process exhibit a bandgap of only 1.85 eV, which matches well with visible light absorption. The electronic structure of black TiO2 nanoparticles is determined by the unique crystalline and defective core/disordered shell morphology. We introduce new insights that will be useful for the design of nanostructured photocatalysts for energy applications.
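
The statement that a 1.85 eV gap "matches well with visible light absorption" can be checked against the standard photon energy-wavelength relation (a textbook conversion, not a calculation reported in the paper):

\[
\lambda \;=\; \frac{hc}{E_g} \;\approx\; \frac{1240\ \text{eV nm}}{1.85\ \text{eV}} \;\approx\; 670\ \text{nm},
\]

i.e. the absorption edge sits in the red part of the visible spectrum, compared with roughly 390 nm (ultraviolet) for the ~3.2 eV gap of ordinary anatase TiO2.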

1,403 citations


Journal ArticleDOI
TL;DR: Findings provide evidence for the short-term benefits of minimally invasive oesophagectomy for patients with resectable oesophageal cancer.

1,285 citations


Book ChapterDOI
Wil M. P. van der Aalst1, Wil M. P. van der Aalst2, A Arya Adriansyah1, Ana Karla Alves de Medeiros3, Franco Arcieri4, Thomas Baier5, Tobias Blickle6, Jagadeesh Chandra Bose1, Peter van den Brand, Ronald Brandtjen, Joos C. A. M. Buijs1, Andrea Burattin7, Josep Carmona8, Malu Castellanos9, Jan Claes10, Jonathan Cook11, Nicola Costantini, Francisco Curbera12, Ernesto Damiani13, Massimiliano de Leoni1, Pavlos Delias, Boudewijn F. van Dongen1, Marlon Dumas14, Schahram Dustdar15, Dirk Fahland1, Diogo R. Ferreira16, Walid Gaaloul17, Frank van Geffen18, Sukriti Goel19, CW Christian Günther, Antonella Guzzo20, Paul Harmon, Arthur H. M. ter Hofstede2, Arthur H. M. ter Hofstede1, John Hoogland, Jon Espen Ingvaldsen, Koki Kato21, Rudolf Kuhn, Akhil Kumar22, Marcello La Rosa2, Fabrizio Maria Maggi1, Donato Malerba23, RS Ronny Mans1, Alberto Manuel, Martin McCreesh, Paola Mello24, Jan Mendling25, Marco Montali26, Hamid Reza Motahari-Nezhad9, Michael zur Muehlen27, Jorge Munoz-Gama8, Luigi Pontieri28, Joel Ribeiro1, A Anne Rozinat, Hugo Seguel Pérez, Ricardo Seguel Pérez, Marcos Sepúlveda29, Jim Sinur, Pnina Soffer30, Minseok Song31, Alessandro Sperduti7, Giovanni Stilo4, Casper Stoel, Keith D. Swenson21, Maurizio Talamo4, Wei Tan12, Christopher Turner32, Jan Vanthienen33, George Varvaressos, Eric Verbeek1, Marc Verdonk34, Roberto Vigo, Jianmin Wang35, Barbara Weber36, Matthias Weidlich37, Ton Weijters1, Lijie Wen35, Michael Westergaard1, Moe Thandar Wynn2 
01 Jan 2012
TL;DR: This manifesto hopes to serve as a guide for software developers, scientists, consultants, business managers, and end-users to increase the maturity of process mining as a new tool to improve the design, control, and support of operational business processes.
Abstract: Process mining techniques are able to extract knowledge from event logs commonly available in today’s information systems. These techniques provide new means to discover, monitor, and improve processes in a variety of application domains. There are two main drivers for the growing interest in process mining. On the one hand, more and more events are being recorded, thus, providing detailed information about the history of processes. On the other hand, there is a need to improve and support business processes in competitive and rapidly changing environments. This manifesto is created by the IEEE Task Force on Process Mining and aims to promote the topic of process mining. Moreover, by defining a set of guiding principles and listing important challenges, this manifesto hopes to serve as a guide for software developers, scientists, consultants, business managers, and end-users. The goal is to increase the maturity of process mining as a new tool to improve the (re)design, control, and support of operational business processes.
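
As a minimal, concrete example of what "extracting knowledge from event logs" can look like, the Python sketch below computes a directly-follows relation, a basic building block of many process-discovery algorithms; the log format and field names are illustrative assumptions, not something prescribed by the manifesto.

from collections import Counter

def directly_follows(event_log):
    """Count how often activity a is directly followed by activity b
    within the same case of an event log.

    event_log : iterable of (case_id, timestamp, activity) tuples.
    Returns a Counter mapping (a, b) -> frequency."""
    cases = {}
    for case_id, timestamp, activity in sorted(event_log, key=lambda e: (e[0], e[1])):
        cases.setdefault(case_id, []).append(activity)

    relation = Counter()
    for trace in cases.values():
        for a, b in zip(trace, trace[1:]):
            relation[(a, b)] += 1
    return relation

log = [
    ("c1", 1, "register"), ("c1", 2, "check"), ("c1", 3, "decide"),
    ("c2", 1, "register"), ("c2", 2, "decide"),
]
print(directly_follows(log))
# Counter({('register', 'check'): 1, ('check', 'decide'): 1, ('register', 'decide'): 1})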

1,135 citations


Journal ArticleDOI
TL;DR: Understanding of the mechanisms by which stress and glucocorticoids affect glutamate transmission provides insights into normal brain functioning, as well as the pathophysiology and potential new treatments of stress-related neuropsychiatric disorders.
Abstract: Mounting evidence suggests that acute and chronic stress, especially the stress-induced release of glucocorticoids, induces changes in glutamate neurotransmission in the prefrontal cortex and the hippocampus, thereby influencing some aspects of cognitive processing. In addition, dysfunction of glutamatergic neurotransmission is increasingly considered to be a core feature of stress-related mental illnesses. Recent studies have shed light on the mechanisms by which stress and glucocorticoids affect glutamate transmission, including effects on glutamate release, glutamate receptors and glutamate clearance and metabolism. This new understanding provides insights into normal brain functioning, as well as the pathophysiology and potential new treatments of stress-related neuropsychiatric disorders.

Journal ArticleDOI
TL;DR: This panel addressed some of the limitations of the prior ARDS definition by incorporating current data, physiologic concepts, and clinical trials results to develop the Berlin definition, which should facilitate case recognition and better match treatment options to severity in both research trials and clinical practice.
Abstract: Our objective was to revise the definition of acute respiratory distress syndrome (ARDS) using a conceptual model incorporating reliability and validity, and a novel iterative approach with formal evaluation of the definition. The European Society of Intensive Care Medicine identified three chairs with broad expertise in ARDS who selected the participants and created the agenda. After 2 days of consensus discussions a draft definition was developed, which then underwent empiric evaluation followed by consensus revision. The Berlin Definition of ARDS maintains a link to prior definitions with diagnostic criteria of timing, chest imaging, origin of edema, and hypoxemia. Patients may have ARDS if the onset is within 1 week of a known clinical insult or new/worsening respiratory symptoms. For the bilateral opacities on chest radiograph criterion, a reference set of chest radiographs has been developed to enhance inter-observer reliability. The pulmonary artery wedge pressure criterion for hydrostatic edema was removed, and illustrative vignettes were created to guide judgments about the primary cause of respiratory failure. If no risk factor for ARDS is apparent, however, objective evaluation (e.g., echocardiography) is required to help rule out hydrostatic edema. A minimum level of positive end-expiratory pressure and mutually exclusive PaO2/FiO2 thresholds were chosen for the different levels of ARDS severity (mild, moderate, severe) to better categorize patients with different outcomes and potential responses to therapy. This panel addressed some of the limitations of the prior ARDS definition by incorporating current data, physiologic concepts, and clinical trials results to develop the Berlin definition, which should facilitate case recognition and better match treatment options to severity in both research trials and clinical practice.
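
A hedged sketch of the oxygenation thresholds as they are commonly summarized from the Berlin definition (mild: 200 < PaO2/FiO2 ≤ 300 mmHg, moderate: 100 < PaO2/FiO2 ≤ 200 mmHg, severe: PaO2/FiO2 ≤ 100 mmHg, each with PEEP or CPAP ≥ 5 cm H2O); this illustrates only the threshold logic, assumes the timing, imaging and origin-of-edema criteria described above are already met, and is not clinical software.

def berlin_severity(pao2_fio2_mmHg, peep_cmH2O):
    """Illustrative Berlin-definition oxygenation categories only; the other
    Berlin criteria (timing, bilateral opacities, origin of edema) are assumed
    to be satisfied before this function is called."""
    if peep_cmH2O < 5:
        return "not classifiable (requires PEEP/CPAP >= 5 cm H2O)"
    if pao2_fio2_mmHg <= 100:
        return "severe"
    if pao2_fio2_mmHg <= 200:
        return "moderate"
    if pao2_fio2_mmHg <= 300:
        return "mild"
    return "does not meet the ARDS oxygenation criterion"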

Journal ArticleDOI
TL;DR: An overview about biological applications of magnetic colloidal nanoparticles will be given, which comprises their synthesis, characterization, and in vitro and in vivo applications, to address the remaining challenges for an extended application of magnetic nanoparticles in medicine.
Abstract: In this review an overview about biological applications of magnetic colloidal nanoparticles will be given, which comprises their synthesis, characterization, and in vitro and in vivo applications. The potential future role of magnetic nanoparticles compared to other functional nanoparticles will be discussed by highlighting the possibility of integration with other nanostructures and with existing biotechnology as well as by pointing out the specific properties of magnetic colloids. Current limitations in the fabrication process and issues related with the outcome of the particles in the body will be also pointed out in order to address the remaining challenges for an extended application of magnetic nanoparticles in medicine.

Journal ArticleDOI
TL;DR: The evidence base for the diagnosis and management of amyotrophic lateral sclerosis (ALS) is weak and needs to be strengthened, according to the authors.
Abstract: Background: The evidence base for the diagnosis and management of amyotrophic lateral sclerosis (ALS) is weak. Objectives: To provide evidence-based or expert recommendations for the diagnosis and ...

Journal ArticleDOI
TL;DR: A paradigm shift from a monoamine hypothesis of depression to a neuroplasticity hypothesis focused on glutamate may represent a substantial advancement in the working hypothesis that drives research for new drugs and therapies.

Journal ArticleDOI
29 Mar 2012
TL;DR: In this article, the authors reported results from searches for the standard model Higgs boson in proton-proton collisions at sqrt(s) = 7 TeV in five decay modes: gamma pair, b-quark pair, tau lepton pair, W pair, and Z pair.
Abstract: Combined results are reported from searches for the standard model Higgs boson in proton-proton collisions at sqrt(s)=7 TeV in five Higgs boson decay modes: gamma pair, b-quark pair, tau lepton pair, W pair, and Z pair. The explored Higgs boson mass range is 110-600 GeV. The analysed data correspond to an integrated luminosity of 4.6-4.8 inverse femtobarns. The expected excluded mass range in the absence of the standard model Higgs boson is 118-543 GeV at 95% CL. The observed results exclude the standard model Higgs boson in the mass range 127-600 GeV at 95% CL, and in the mass range 129-525 GeV at 99% CL. An excess of events above the expected standard model background is observed at the low end of the explored mass range making the observed limits weaker than expected in the absence of a signal. The largest excess, with a local significance of 3.1 sigma, is observed for a Higgs boson mass hypothesis of 124 GeV. The global significance of observing an excess with a local significance greater than 3.1 sigma anywhere in the search range 110-600 (110-145) GeV is estimated to be 1.5 sigma (2.1 sigma). More data are required to ascertain the origin of this excess.

Journal ArticleDOI
TL;DR: The short-term reductions in cancer incidence and mortality and the decrease in risk of major extracranial bleeds with extended use, and their low case-fatality, add to the case for daily aspirin in prevention of cancer.

Journal ArticleDOI
Nobuyuki Hamajima, Kaoru Hirose, K. Tajima, T E Rohan1 +289 more · Institutions (81)
TL;DR: The effects of menarche and menopause on breast cancer risk might not be acting merely by lengthening women's total number of reproductive years, and endogenous ovarian hormones are more relevant for oestrogen receptor-positive disease than for oestrogen receptor-negative disease and for lobular than for ductal tumours.
Abstract: BACKGROUND:Menarche and menopause mark the onset and cessation, respectively, of ovarian activity associated with reproduction, and affect breast cancer risk. Our aim was to assess the strengths of their effects and determine whether they depend on characteristics of the tumours or the affected women.METHODS:Individual data from 117 epidemiological studies, including 118 964 women with invasive breast cancer and 306 091 without the disease, none of whom had used menopausal hormone therapy, were included in the analyses. We calculated adjusted relative risks (RRs) associated with menarche and menopause for breast cancer overall, and by tumour histology and by oestrogen receptor expression.FINDINGS:Breast cancer risk increased by a factor of 1·050 (95% CI 1·044-1·057; p<0·0001) for every year younger at menarche, and independently by a smaller amount (1·029, 1·025-1·032; p<0·0001), for every year older at menopause. Premenopausal women had a greater risk of breast cancer than postmenopausal women of an identical age (RR at age 45-54 years 1·43, 1·33-1·52, p<0·001). All three of these associations were attenuated by increasing adiposity among postmenopausal women, but did not vary materially by women's year of birth, ethnic origin, childbearing history, smoking, alcohol consumption, or hormonal contraceptive use. All three associations were stronger for lobular than for ductal tumours (p<0·006 for each comparison). The effect of menopause in women of an identical age and trends by age at menopause were stronger for oestrogen receptor-positive disease than for oestrogen receptor-negative disease (p<0·01 for both comparisons).INTERPRETATION:The effects of menarche and menopause on breast cancer risk might not be acting merely by lengthening women's total number of reproductive years. Endogenous ovarian hormones are more relevant for oestrogen receptor-positive disease than for oestrogen receptor-negative disease and for lobular than for ductal tumours.
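
To interpret the per-year factors, note that they compound multiplicatively over several years; for example (an illustrative calculation, not a figure reported in the study):

\[
\text{RR}_{\text{menarche 5 years earlier}} \;\approx\; 1.050^{5} \;\approx\; 1.28,
\qquad
\text{RR}_{\text{menopause 10 years later}} \;\approx\; 1.029^{10} \;\approx\; 1.33.
\]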

Journal ArticleDOI
TL;DR: In this paper, a prospective, multi-centre, questionnaire-based survey measured costs and quality of life in ambulatory care and in 12 tertiary care centres in 10 countries.
Abstract: Methods: A prospective, multi-centre, questionnaire-based survey measured costs and quality of life in ambulatory care and in 12 tertiary care centres in 10 countries. The study enrolled women with a diagnosis of endometriosis and with at least one centre-specific contact related to endometriosis-associated symptoms in 2008. The main outcome measures were health care costs, costs of productivity loss, total costs and quality-adjusted life years. Predictors of costs were identified using regression analysis. Results: Data analysis of 909 women demonstrated that the average annual total cost per woman was €9579 (95% confidence interval €8559–€10 599). Costs of productivity loss of €6298 per woman were double the health care costs of €3113 per woman. Health care costs were mainly due to surgery (29%), monitoring tests (19%), hospitalization (18%) and physician visits (16%). Endometriosis-associated symptoms generated 0.809 quality-adjusted life years per woman. Decreased quality of life was the most important predictor of direct health care and total costs. Costs were greater with increasing severity of endometriosis, presence of pelvic pain, presence of infertility and a higher number of years since diagnosis. Conclusions: Our study invited women to report resource use based on endometriosis-associated symptoms only, rather than drawing on a control population of women without endometriosis. Our study showed that the economic burden associated with endometriosis treated in referral centres is high and is similar to other chronic diseases (diabetes, Crohn's disease, rheumatoid arthritis). It arises predominantly from productivity loss, and is predicted by decreased quality of life.

Journal ArticleDOI
TL;DR: The objective of this guideline is to provide healthcare professionals with clear, up-to-date, and practical guidance on the management of TTP and related thrombotic microangiopathies, defined by thrombocytopenia, microangiopathic haemolytic anaemia (MAHA) and small vessel thrombosis.
Abstract: related to the subsections of this guideline. The writing group produced the draft guideline, which was subsequently revised by consensus by members of the Haemostasis and Thrombosis Task Force of the BCSH. The guideline was then reviewed by a sounding board of British haematologists, the BCSH and the British Society for Haematology Committee and comments incorporated where appropriate. The ‘GRADE’ system was used to quote levels and grades of evidence, details of which can be found at http://www.bcshguidelines.com. The objective of this guideline is to provide healthcare professionals with clear, up-to-date, and practical guidance on the management of TTP and related thrombotic microangiopathies, defined by thrombocytopenia, microangiopathic haemolytic anaemia (MAHA) and small vessel thrombosis.

Journal ArticleDOI
TL;DR: It is found that Ahr−/− mice had a considerable deficit in ILC22 cells that resulted in less secretion of IL-22 and inadequate protection against intestinal bacterial infection.
Abstract: Innate lymphoid cells (ILCs) of the ILC22 type protect the intestinal mucosa from infection by secreting interleukin 22 (IL-22). ILC22 cells include NKp46(+) and lymphoid tissue-inducer (LTi)-like subsets that express the aryl hydrocarbon receptor (AHR). Here we found that Ahr(-/-) mice had a considerable deficit in ILC22 cells that resulted in less secretion of IL-22 and inadequate protection against intestinal bacterial infection. Ahr(-/-) mice also lacked postnatally 'imprinted' cryptopatches and isolated lymphoid follicles (ILFs), but not embryonically 'imprinted' Peyer's patches. AHR induced the transcription factor Notch, which was required for NKp46(+) ILCs, whereas LTi-like ILCs, cryptopatches and ILFs were partially dependent on Notch signaling. Thus, AHR was essential for ILC22 cells and postnatal intestinal lymphoid tissues. Moreover, ILC22 subsets were heterogeneous in their requirement for Notch and their effect on the generation of intestinal lymphoid tissues.

Journal ArticleDOI
J. P. Lees1, V. Poireau1, V. Tisserand1, J. Garra Tico2 +362 more · Institutions (77)
TL;DR: In this article, the full BaBar data sample was used to measure the ratios R(D(*)) = B(B -> D(*) Tau Nu)/B(B -> D(*) l Nu), which are sensitive to new physics contributions in the form of a charged Higgs boson in the type II two-Higgs-doublet model.
Abstract: Based on the full BaBar data sample, we report improved measurements of the ratios R(D(*)) = B(B -> D(*) Tau Nu)/B(B -> D(*) l Nu), where l is either e or mu. These ratios are sensitive to new physics contributions in the form of a charged Higgs boson. We measure R(D) = 0.440 +- 0.058 +- 0.042 and R(D*) = 0.332 +- 0.024 +- 0.018, which exceed the Standard Model expectations by 2.0 sigma and 2.7 sigma, respectively. Taken together, our results disagree with these expectations at the 3.4 sigma level. This excess cannot be explained by a charged Higgs boson in the type II two-Higgs-doublet model. We also report the observation of the decay B -> D Tau Nu, with a significance of 6.8 sigma.

Journal ArticleDOI
TL;DR: In this article, equations driven by a non-local integrodifferential operator with homogeneous Dirichlet boundary conditions are studied, and the existence of non-trivial solutions is established using the Mountain Pass Theorem.

Journal ArticleDOI
23 Aug 2012 · Nature
TL;DR: It is shown that overexpression of PSTOL1 in locally adapted rice varieties significantly enhances grain yield in phosphorus-deficient soil and acts as an enhancer of early root growth, thereby enabling plants to acquire more phosphorus and other nutrients.
Abstract: As an essential macroelement for all living cells, phosphorus is indispensable in agricultural production systems. Natural phosphorus reserves are limited, and it is therefore important to develop phosphorus-efficient crops. A major quantitative trait locus for phosphorus-deficiency tolerance, Pup1, was identified in the traditional aus-type rice variety Kasalath about a decade ago. However, its functional mechanism remained elusive until the locus was sequenced, showing the presence of a Pup1-specific protein kinase gene, which we have named phosphorus-starvation tolerance 1 (PSTOL1). This gene is absent from the rice reference genome and other phosphorus-starvation-intolerant modern varieties. Here we show that overexpression of PSTOL1 in such varieties significantly enhances grain yield in phosphorus-deficient soil. Further analyses show that PSTOL1 acts as an enhancer of early root growth, thereby enabling plants to acquire more phosphorus and other nutrients. The absence of PSTOL1 and other genes-for example, the submergence-tolerance gene SUB1A-from modern rice varieties underlines the importance of conserving and exploring traditional germplasm. Introgression of this quantitative trait locus into locally adapted rice varieties in Asia and Africa is expected to considerably enhance productivity under low phosphorus conditions.

Journal ArticleDOI
TL;DR: The existing evidence for treatment of atopic eczema (atopic dermatitis, AE) is evaluated using the national standard Appraisal of Guidelines Research and Evaluation as discussed by the authors.
Abstract: The existing evidence for treatment of atopic eczema (atopic dermatitis, AE) is evaluated using the national standard Appraisal of Guidelines Research and Evaluation. The consensus process consisted of a nominal group process and a DELPHI procedure. Management of AE must consider the individual symptomatic variability of the disease. Basic therapy is focused on hydrating topical treatment, and avoidance of specific and unspecific provocation factors. Anti-inflammatory treatment based on topical glucocorticosteroids and topical calcineurin inhibitors (TCI) is used for exacerbation management and more recently for proactive therapy in selected cases. Topical corticosteroids remain the mainstay of therapy, but the TCI tacrolimus and pimecrolimus are preferred in certain locations. Systemic immune-suppressive treatment is an option for severe refractory cases. Microbial colonization and superinfection may induce disease exacerbation and can justify additional antimicrobial treatment. Adjuvant therapy includes UV irradiation preferably with UVA1 wavelength or UVB 311 nm. Dietary recommendations should be specific and given only in diagnosed individual food allergy. Allergen-specific immunotherapy to aeroallergens may be useful in selected cases. Stress-induced exacerbations may make psychosomatic counselling recommendable. 'Eczema school' educational programs have been proven to be helpful. Pruritus is targeted with the majority of the recommended therapies, but some patients need additional antipruritic therapies.

Journal ArticleDOI
TL;DR: In this article, a hybrid model was used to forecast the climate-driven spatio-temporal dynamics of 150 high-mountain plant species across the European Alps, which predicts average range size reductions of 44-50% by the end of the twenty-first century, which is similar to projections from the most optimistic static model.
Abstract: Quantitative estimates of the range loss of mountain plants under climate change have so far mostly relied on static geographical projections of species’ habitat shifts 1–3. Here, we use a hybrid model 4 that combines such projections with simulations of demography and seed dispersal to forecast the climate-driven spatio-temporal dynamics of 150 high-mountain plant species across the European Alps. This model predicts average range size reductions of 44–50% by the end of the twenty-first century, which is similar to projections from the most ‘optimistic’ static model (49%). However, the hybrid model also indicates that population dynamics will lag behind climatic trends and that an average of 40% of the range still occupied at the end of the twenty-first century will have become climatically unsuitable for the respective species, creating an extinction debt 5,6. Alarmingly, species endemic to the Alps seem to face the highest range losses. These results caution against optimistic conclusions from moderate range size reductions observed during the twenty-first century as they are likely to belie more severe longer-term effects of climate warming on mountain plants. Many plant and animal species have already been shifting their ranges in response to the past century’s climatic trends 7–9. In mountains, owing to the altitudinal temperature gradient, species should primarily move upslope under warming, as has indeed been frequently documented during the recent decades 10,11 as well as in the palaeorecord 12,13. As mountains usually have conical shapes, upslope movement inevitably results in range loss and may even lead to ‘mountain-top extinctions’ 14 in extreme cases. However, previous predictions of the magnitude of such range and biodiversity losses during the twenty-first century have been criticized 4,15 for relying on static ‘niche-based’ modelling approaches 16, which disregard several processes crucial to range

Journal ArticleDOI
TL;DR: The primary target audience of this position paper is clinicians who have limited familiarity with CPX but whose caregiving would be enhanced by its application; the paper also provides a series of forms designed to highlight the utility of CPX in clinical decision-making.
Abstract: From an evidence-based perspective, cardiopulmonary exercise testing (CPX) is a well-supported assessment technique in both the United States (US) and Europe. The combination of standard exercise testing (ET) (ie, progressive exercise provocation in association with serial electrocardiograms [ECG], hemodynamics, oxygen saturation, and subjective symptoms) and measurement of ventilatory gas exchange amounts to a superior method to: 1) accurately quantify cardiorespiratory fitness (CRF), 2) delineate the physiologic system(s) underlying exercise responses, which can be applied as a means to identify the exercise-limiting pathophysiologic mechanism(s) and/or performance differences, and 3) formulate function-based prognostic stratification. Cardiopulmonary ET certainly carries an additional cost as well as competency requirements and is not an essential component of evaluation in all patient populations. However, there are several conditions of confirmed, suspected, or unknown etiology where the data gained from this form of ET is highly valuable in terms of clinical decision making.1 Several CPX statements have been published by well-respected organizations in both the US and Europe.1–5 Despite these prominent reports and the plethora of pertinent medical literature which they feature, underutilization of CPX persists. This discrepancy is at least partly attributable to the fact that the currently available CPX consensus statements are inherently complex and fail to convey succinct, clinically centered strategies to utilize CPX indices effectively. Likewise, current CPX software packages generate an overwhelming abundance of data, which to most clinicians are incomprehensible and abstract. Ironically, in contrast to the protracted scientific statements and dense CPX data outputs, the list of CPX variables that have proven clinical application is concise and uncomplicated. Therefore, the goal of this writing group is to present an approach of CPX in a way that assists in making meaningful decisions regarding a patient’s care. Experts from the European Association for Cardiovascular Prevention and Rehabilitation and American Heart Association have joined in this effort to distill easy-to-follow guidance on CPX interpretation based upon current scientific evidence. This document also provides a series of forms that are designed to highlight the utility of CPX in clinical decision-making. Not only will this improve patient management, it will also catalyze uniform and unambiguous data interpretation across laboratories on an international level. The primary target audience of this position paper is clinicians who have limited orientation with CPX but whose caregiving would be enhanced by familiarity and application of this assessment. The ultimate goal is to increase awareness of the value of CPX and to increase the number of healthcare professionals who are able to perform clinically meaningful CPX interpretation. Moreover, this document will hopefully lead to an increase in appropriate patient referrals to CPX with enhanced efficiencies in patient management. For more detailed information on CPX, including procedures for patient preparation, equipment calibration, and conducting the test, readers are encouraged to review other publications that address these and other topics in great detail.1–5

Journal ArticleDOI
TL;DR: Inflammatory cells and mediators are an essential component of the tumor microenvironment, and dissection of the diversity of cancer-related inflammation is instrumental to the design of therapeutic approaches that target cancer-related inflammation.

Journal ArticleDOI
Georges Aad1, Brad Abbott2, Jalal Abdallah3, S. Abdel Khalek +3081 more · Institutions (197)
TL;DR: A combined search for the Standard Model Higgs boson with the ATLAS experiment at the LHC using datasets corresponding to integrated luminosities from 1.04 fb(-1) to 4.9 fb(-1) of pp collisions is described in this paper.

Journal ArticleDOI
TL;DR: In this article, the performance of muon reconstruction, identification, and triggering in CMS has been studied using 40 inverse picobarns of data collected in pp collisions at the LHC in 2010.
Abstract: The performance of muon reconstruction, identification, and triggering in CMS has been studied using 40 inverse picobarns of data collected in pp collisions at sqrt(s) = 7 TeV at the LHC in 2010. A few benchmark sets of selection criteria covering a wide range of physics analysis needs have been examined. For all considered selections, the efficiency to reconstruct and identify a muon with a transverse momentum pT larger than a few GeV is above 95% over the whole region of pseudorapidity covered by the CMS muon system, abs(eta)<2.4, while the probability to misidentify a hadron as a muon is well below 1%. The efficiency to trigger on single muons with pT above a few GeV is higher than 90% over the full eta range, and typically substantially better. The overall momentum scale is measured to a precision of 0.2% with muons from Z decays. The transverse momentum resolution varies from 1% to 6% depending on pseudorapidity for muons with pT below 100 GeV and, using cosmic rays, it is shown to be better than 10% in the central region up to pT = 1 TeV. Observed distributions of all quantities are well reproduced by the Monte Carlo simulation.