Showing papers by "University of Bologna published in 2012"
••
TL;DR: In this article, a search for the Standard Model Higgs boson in proton-proton collisions with the ATLAS detector at the LHC is presented; the observed excess has a significance of 5.9 standard deviations, corresponding to a background fluctuation probability of 1.7×10−9.
9,282 citations
••
TL;DR: In this paper, results from searches for the standard model Higgs boson in proton-proton collisions at 7 and 8 TeV in the CMS experiment at the LHC are reported, using data samples corresponding to integrated luminosities of up to 5.1 fb−1 at 7 TeV and 5.3 fb−1 at 8 TeV; an excess of events with a local significance of 5.0 standard deviations is observed at a mass near 125 GeV.
8,857 citations
••
TL;DR: In this 4th edition of the Maastricht consensus report aspects related to the clinical role of H pylori were looked at again in 2010, with recommendations to guide doctors involved in the management of this infection associated with various clinical conditions.
Abstract: Management of Helicobacter pylori infection is evolving and in this 4th edition of the Maastricht consensus report aspects related to the clinical role of H pylori were looked at again in 2010. In the 4th Maastricht/Florence Consensus Conference 44 experts from 24 countries took active part and examined key clinical aspects in three subdivided workshops: (1) Indications and contraindications for diagnosis and treatment, focusing on dyspepsia, non-steroidal anti-inflammatory drugs or aspirin use, gastro-oesophageal reflux disease and extraintestinal manifestations of the infection. (2) Diagnostic tests and treatment of infection. (3) Prevention of gastric cancer and other complications. The results of the individual workshops were submitted to a final consensus voting to all participants. Recommendations are provided on the basis of the best current evidence and plausibility to guide doctors involved in the management of this infection associated with various clinical conditions.
2,167 citations
••
Columbia University1, University of Amsterdam2, University of California, Los Angeles3, University of Coimbra4, University of Zurich5, University of Mainz6, University of Münster7, University of Nantes8, Weizmann Institute of Science9, Shanghai Jiao Tong University10, University of Bologna11, Max Planck Society12, Purdue University13, Rice University14
TL;DR: A search for particle dark matter with the XENON100 experiment, operated at the Laboratori Nazionali del Gran Sasso for 13 months during 2011 and 2012, has yielded no evidence for dark matter interactions.
Abstract: We report on a search for particle dark matter with the XENON100 experiment, operated at the Laboratori Nazionali del Gran Sasso (LNGS) for 13 months during 2011 and 2012. XENON100 features an ultra-low electromagnetic background of (5.3\pm0.6)\times10^-3 events (kg day keVee)^-1 in the energy region of interest. A blind analysis of 224.6 live days \times 34 kg exposure has yielded no evidence for dark matter interactions. The two candidate events observed in the pre-defined nuclear recoil energy range of 6.6-30.5 keVnr are consistent with the background expectation of (1.0 \pm 0.2) events. A Profile Likelihood analysis using a 6.6-43.3 keVnr energy range sets the most stringent limit on the spin-independent elastic WIMP-nucleon scattering cross section for WIMP masses above 8 GeV/c^2, with a minimum of 2 \times 10^-45 cm^2 at 55 GeV/c^2 and 90% confidence level.
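XENON100's published limit comes from a profile-likelihood analysis, which is not reproduced here. As a much simpler illustration of the underlying counting logic (2 observed events on an expected background of 1.0), a classical Poisson upper limit on the signal mean can be computed by bisection; this sketch is only a pedagogical approximation, not the experiment's method:

```python
import math

def poisson_cdf(n_obs, mu):
    """P(N <= n_obs) for N ~ Poisson(mu)."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n_obs + 1))

def upper_limit(n_obs, background, cl=0.90, s_max=100.0):
    """Classical counting-experiment upper limit on the signal mean s:
    solve P(N <= n_obs | background + s) = 1 - cl by bisection.
    The CDF is monotonically decreasing in s, so bisection converges."""
    lo, hi = 0.0, s_max
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_obs, background + mid) > 1.0 - cl:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# 2 observed events, 1.0 expected background, 90% confidence level
s_up = upper_limit(2, 1.0)
print(f"90% CL upper limit on signal events: {s_up:.2f}")
```

For n = 2 observed and no background the standard 90% CL Poisson upper limit on the mean is about 5.32 events; subtracting the expected background of 1.0 leaves roughly 4.3 signal events here.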
1,624 citations
••
TL;DR: The state of the art in the science of smart cities is defined, and seven project areas are proposed: Integrated Databases for the Smart City; Sensing, Networking and the Impact of New Social Media; Modelling Network Performance, Mobility and Travel Behaviour; Modelling Urban Land Use, Transport and Economic Interactions; Decision Support as Urban Intelligence; and Participatory Governance and Planning Structures for the Smart City.
Abstract: Here we sketch the rudiments of what constitutes a smart city which we define as a city in which ICT is merged with traditional infrastructures, coordinated and integrated using new digital technologies. We first sketch our vision defining seven goals which concern: developing a new understanding of urban problems; effective and feasible ways to coordinate urban technologies; models and methods for using urban data across spatial and temporal scales; developing new technologies for communication and dissemination; developing new forms of urban governance and organisation; defining critical problems relating to cities, transport, and energy; and identifying risk, uncertainty, and hazards in the smart city. To this, we add six research challenges: to relate the infrastructure of smart cities to their operational functioning and planning through management, control and optimisation; to explore the notion of the city as a laboratory for innovation; to provide portfolios of urban simulation which inform future designs; to develop technologies that ensure equity, fairness and realise a better quality of city life; to develop technologies that ensure informed participation and create shared knowledge for democratic city governance; and to ensure greater and more effective mobility and access to opportunities for urban populations. We begin by defining the state of the art, explaining the science of smart cities. We define six scenarios based on new cities badging themselves as smart, older cities regenerating themselves as smart, the development of science parks, tech cities, and technopoles focused on high technologies, the development of urban services using contemporary ICT, the use of ICT to develop new urban intelligence functions, and the development of online and mobile forms of participation. 
Seven project areas are then proposed: Integrated Databases for the Smart City, Sensing, Networking and the Impact of New Social Media, Modelling Network Performance, Mobility and Travel Behaviour, Modelling Urban Land Use, Transport and Economic Interactions, Modelling Urban Transactional Activities in Labour and Housing Markets, Decision Support as Urban Intelligence, Participatory Governance and Planning Structures for the Smart City. Finally we anticipate the paradigm shifts that will occur in this research and define a series of key demonstrators which we believe are important to progressing a science of smart cities.
1,616 citations
••
Imperial College London1, Copenhagen Business School2, University of Gothenburg3, Tilburg University4, Royal Institute of Technology5, Spanish National Research Council6, University of Bologna7, University of Turin8, University of Cambridge9, University of Kassel10, University of Strasbourg11, University of Bordeaux12, Bocconi University13, University of Bath14
TL;DR: In this paper, the authors present a systematic review of research on academic scientists' involvement in collaborative research, contract research, consulting and informal relationships for university-industry knowledge transfer, which they refer as academic engagement.
Abstract: A considerable body of work highlights the relevance of collaborative research, contract research, consulting and informal relationships for university-industry knowledge transfer. We present a systematic review of research on academic scientists’ involvement in these activities to which we refer as ‘academic engagement’. Apart from extracting findings that are generalisable across studies, we ask how academic engagement differs from commercialization, defined as intellectual property creation and academic entrepreneurship. We identify the individual, organizational and institutional antecedents and consequences of academic engagement, and then compare these findings with the antecedents and consequences of commercialization. Apart from being more widely practiced, academic engagement is distinct from commercialization in that it is closely aligned with traditional academic research activities, and pursued by academics to access resources supporting their research agendas. We conclude by identifying future research needs, opportunities for methodological improvement and policy interventions. (Published version available via open access)
1,589 citations
••
University of Cologne1, Max Planck Society2, University of Bonn3, Ghent University4, Broad Institute5, Stanford University6, Technical University of Dortmund7, Columbia University8, University of Melbourne9, St. Vincent's Health System10, University of Jena11, Casa Sollievo della Sofferenza12, University of Groningen13, VU University Amsterdam14, University of Bologna15, University of Liverpool16, University of Oslo17, University of Zurich18, Peter MacCallum Cancer Centre19, Institut Gustave Roussy20, University of Grenoble21, Vanderbilt University22, Harvard University23, University of Washington24, University of Strasbourg25
TL;DR: This study implicates histone modification as a major feature of SCLC, reveals potentially therapeutically tractable genomic alterations and provides a generalizable framework for the identification of biologically relevant genes in the context of high mutational background.
Abstract: Small-cell lung cancer (SCLC) is an aggressive lung tumor subtype with poor prognosis(1-3). We sequenced 29 SCLC exomes, 2 genomes and 15 transcriptomes and found an extremely high mutation rate of 7.4 +/- 1 protein-changing mutations per million base pairs. Therefore, we conducted integrated analyses of the various data sets to identify pathogenetically relevant mutated genes. In all cases, we found evidence for inactivation of TP53 and RB1 and identified recurrent mutations in the CREBBP, EP300 and MLL genes that encode histone modifiers. Furthermore, we observed mutations in PTEN, SLIT2 and EPHA7, as well as focal amplifications of the FGFR1 tyrosine kinase gene. Finally, we detected many of the alterations found in humans in SCLC tumors from Tp53 and Rb1 double knockout mice(4). Our study implicates histone modification as a major feature of SCLC, reveals potentially therapeutically tractable genomic alterations and provides a generalizable framework for the identification of biologically relevant genes in the context of high mutational background.
1,177 citations
••
Eindhoven University of Technology1, Queensland University of Technology2, Capgemini3, University of Rome Tor Vergata4, Humboldt University of Berlin5, Software AG6, University of Padua7, Polytechnic University of Catalonia8, Hewlett-Packard9, Ghent University10, New Mexico State University11, IBM12, University of Milan13, University of Tartu14, University of Vienna15, Technical University of Lisbon16, Telecom SudParis17, Rabobank18, Infosys19, University of Calabria20, Fujitsu21, Pennsylvania State University22, University of Bari23, University of Bologna24, Vienna University of Economics and Business25, Free University of Bozen-Bolzano26, Stevens Institute of Technology27, Indian Council of Agricultural Research28, Pontifical Catholic University of Chile29, University of Haifa30, Ulsan National Institute of Science and Technology31, Cranfield University32, Katholieke Universiteit Leuven33, Deloitte34, Tsinghua University35, University of Innsbruck36, Hasso Plattner Institute37
TL;DR: This manifesto hopes to serve as a guide for software developers, scientists, consultants, business managers, and end-users to increase the maturity of process mining as a new tool to improve the design, control, and support of operational business processes.
Abstract: Process mining techniques are able to extract knowledge from event logs commonly available in today’s information systems. These techniques provide new means to discover, monitor, and improve processes in a variety of application domains. There are two main drivers for the growing interest in process mining. On the one hand, more and more events are being recorded, thus, providing detailed information about the history of processes. On the other hand, there is a need to improve and support business processes in competitive and rapidly changing environments. This manifesto is created by the IEEE Task Force on Process Mining and aims to promote the topic of process mining. Moreover, by defining a set of guiding principles and listing important challenges, this manifesto hopes to serve as a guide for software developers, scientists, consultants, business managers, and end-users. The goal is to increase the maturity of process mining as a new tool to improve the (re)design, control, and support of operational business processes.
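As a toy illustration of the first step of process discovery described above (not part of the manifesto itself), the directly-follows relation can be counted from an event log in a few lines; the activity names are invented:

```python
from collections import Counter

def directly_follows(event_log):
    """Count how often activity a is directly followed by activity b
    across all traces in the log; this directly-follows graph is the
    starting point of many process discovery algorithms."""
    dfg = Counter()
    for trace in event_log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

# toy event log: each trace is the ordered list of activities of one case
log = [
    ["register", "check", "decide", "notify"],
    ["register", "check", "recheck", "decide", "notify"],
    ["register", "decide", "notify"],
]
dfg = directly_follows(log)
print(dfg[("register", "check")])  # "check" directly follows "register" in 2 traces
```

Real process-mining toolkits build on this relation to discover full process models (e.g. Petri nets), but the counting step is exactly this simple.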
1,135 citations
••
TL;DR: This review will summarize the current knowledge about the three main forms of gluten reactions: allergic (wheat allergy), autoimmune (celiac disease, dermatitis herpetiformis and gluten ataxia) and possibly immune-mediated (gluten sensitivity), and also outline pathogenic, clinical and epidemiological differences and propose new nomenclature and classifications.
Abstract: A decade ago celiac disease was considered extremely rare outside Europe and, therefore, was almost completely ignored by health care professionals. In only 10 years, key milestones have moved celiac disease from obscurity into the popular spotlight worldwide. Now we are observing another interesting phenomenon that is generating great confusion among health care professionals. The number of individuals embracing a gluten-free diet (GFD) appears much higher than the projected number of celiac disease patients, fueling a global market of gluten-free products approaching $2.5 billion (US) in global sales in 2010. This trend is supported by the notion that, along with celiac disease, other conditions related to the ingestion of gluten have emerged as health care concerns. This review will summarize our current knowledge about the three main forms of gluten reactions: allergic (wheat allergy), autoimmune (celiac disease, dermatitis herpetiformis and gluten ataxia) and possibly immune-mediated (gluten sensitivity), and also outline pathogenic, clinical and epidemiological differences and propose new nomenclature and classifications.
965 citations
••
TL;DR: This review of the psychometric properties and validity of CT measures as well as individual, environmental and genetic factors that influence the circadian typology provides a state of the art discussion to allow professionals to integrate chronobiological aspects of human behavior into their daily practice.
Abstract: The interest in the systematic study of the circadian typology (CT) is relatively recent and has developed rapidly in the last two decades. All the existing data suggest that this individual difference affects our biological and psychological functioning, not only in health, but also in disease. In the present study, we review the current literature concerning the psychometric properties and validity of CT measures as well as individual, environmental and genetic factors that influence the CT. We present a brief overview of the biological markers that are used to define differences between CT groups (sleep-wake cycle, body temperature, cortisol and melatonin), and we assess the implications for CT and adjustment to shiftwork and jet lag. We also review the differences between CT in terms of cognitive abilities, personality traits and the incidence of psychiatric disorders. When necessary, we have emphasized the methodological limitations that exist today and suggested some future avenues of work in order to overcome these. This is a new field of interest to professionals in many different areas (research, labor, academic and clinical), and this review provides a state of the art discussion to allow professionals to integrate chronobiological aspects of human behavior into their daily practice.
936 citations
••
TL;DR: Advances in scientific knowledge on structural molecules, proteins, teichoic acids, and the most recently described extracellular DNA, on the synthesis and genetics of staphylococcal biofilms, and on the complex network of signal factors that intervene in their control are presented, also reporting on the emerging strategies to disrupt or inhibit them.
••
01 Jan 2012
TL;DR: This book provides a comprehensive treatment of assignment problems from their conceptual beginnings in the 1920s through present-day theoretical, algorithmic, and practical developments and can serve as a text for advanced courses in discrete mathematics, integer programming, combinatorial optimization, and algorithmic computer science.
Abstract: This book provides a comprehensive treatment of assignment problems from their conceptual beginnings in the 1920s through present-day theoretical, algorithmic, and practical developments. The authors have organized the book into 10 self-contained chapters to make it easy for readers to use the specific chapters of interest to them without having to read the book linearly. The topics covered include bipartite matching algorithms, linear assignment problems, quadratic assignment problems, multi-index assignment problems, and many variations of these problems. Exercises in the form of numerical examples provide readers with a method of self-study or students with homework problems, and an associated webpage offers applets that readers can use to execute some of the basic algorithms as well as links to computer codes that are available online. Audience: Assignment Problems is a useful tool for researchers, practitioners, and graduate students. Researchers will benefit from the detailed exposition of theory and algorithms related to assignment problems, including the basic linear sum assignment problem and its many variations. Practitioners will learn about practical applications of the methods, the performance of exact and heuristic algorithms, and software options. This book also can serve as a text for advanced courses in discrete mathematics, integer programming, combinatorial optimization, and algorithmic computer science. 
Contents: Preface; Chapter 1: Introduction; Chapter 2: Theoretical Foundations; Chapter 3: Bipartite Matching Algorithms; Chapter 4: Linear Sum Assignment Problem; Chapter 5: Further Results on the Linear Sum Assignment Problem; Chapter 6: Other Types of Linear Assignment Problems; Chapter 7: Quadratic Assignment Problems: Formulations and Bounds; Chapter 8: Quadratic Assignment Problems: Algorithms; Chapter 9: Other Types of Quadratic Assignment Problems; Chapter 10: Multi-index Assignment Problems; Bibliography; Author Index; Subject Index
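To make the book's basic objective concrete (this sketch is not from the book itself), the linear sum assignment problem can be solved exactly for tiny instances by enumerating all permutations:

```python
from itertools import permutations

def linear_sum_assignment_bruteforce(cost):
    """Exact solution of the linear sum assignment problem by enumerating
    all n! permutations -- only sensible for tiny instances, but it makes
    the objective min over phi of sum_i cost[i][phi(i)] explicit."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

# rows = workers, columns = jobs, entries = assignment costs
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
perm, total = linear_sum_assignment_bruteforce(cost)
print(perm, total)  # worker i gets job perm[i]; minimum total cost is 5
```

For anything beyond toy sizes, the Hungarian-type algorithms covered in the book run in O(n^3); in practice one would call a library routine such as SciPy's `optimize.linear_sum_assignment` rather than brute force.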
••
TL;DR: To improve survival, combined treatment with 2 or more drugs with in vitro activity against the isolate, especially those also including a carbapenem, may be more effective than active monotherapy.
Abstract: Background: The spread of Klebsiella pneumoniae (Kp) strains that produce K. pneumoniae carbapenemases (KPCs) has become a significant problem, and treatment of infections caused by these pathogens is a major challenge for clinicians. Methods: In this multicenter retrospective cohort study, conducted in 3 large Italian teaching hospitals, we examined 125 patients with bloodstream infections (BSIs) caused by KPC-producing Kp isolates (KPC-Kp) diagnosed between 1 January 2010 and 30 June 2011. The outcome measured was death within 30 days of the first positive blood culture. Survivor and nonsurvivor subgroups were compared to identify predictors of mortality. Results: The overall 30-day mortality rate was 41.6%. A significantly higher rate was observed among patients treated with monotherapy (54.3% vs 34.1% in those who received combined drug therapy; P = .02). In logistic regression analysis, 30-day mortality was independently associated with septic shock at BSI onset (odds ratio [OR]: 7.17; 95% confidence interval [CI]: 1.65-31.03; P = .008); inadequate initial antimicrobial therapy (OR: 4.17; 95% CI: 1.61-10.76; P = .003); and high APACHE III scores (OR: 1.04; 95% CI: 1.02-1.07; P …). Conclusions: KPC-Kp BSIs are associated with high mortality. To improve survival, combined treatment with 2 or more drugs with in vitro activity against the isolate, especially those also including a carbapenem, may be more effective than active monotherapy.
••
29 Mar 2012
TL;DR: In this article, the authors reported results from searches for the standard model Higgs boson in proton-proton collisions at square root(s) = 7 TeV in five decay modes: gamma pair, b-quark pair, tau lepton pair, W pair, and Z pair.
Abstract: Combined results are reported from searches for the standard model Higgs boson in proton-proton collisions at sqrt(s)=7 TeV in five Higgs boson decay modes: gamma pair, b-quark pair, tau lepton pair, W pair, and Z pair. The explored Higgs boson mass range is 110-600 GeV. The analysed data correspond to an integrated luminosity of 4.6-4.8 inverse femtobarns. The expected excluded mass range in the absence of the standard model Higgs boson is 118-543 GeV at 95% CL. The observed results exclude the standard model Higgs boson in the mass range 127-600 GeV at 95% CL, and in the mass range 129-525 GeV at 99% CL. An excess of events above the expected standard model background is observed at the low end of the explored mass range making the observed limits weaker than expected in the absence of a signal. The largest excess, with a local significance of 3.1 sigma, is observed for a Higgs boson mass hypothesis of 124 GeV. The global significance of observing an excess with a local significance greater than 3.1 sigma anywhere in the search range 110-600 (110-145) GeV is estimated to be 1.5 sigma (2.1 sigma). More data are required to ascertain the origin of this excess.
••
University of Bologna1, University of Barcelona2, INAF3, University of Edinburgh4, University of Toulouse5, Centre national de la recherche scientifique6, European Southern Observatory7, California Institute of Technology8, ETH Zurich9, Max Planck Society10, Paris Diderot University11, University of Vienna12, Spanish National Research Council13, University of Insubria14, Institute for the Physics and Mathematics of the Universe15, University of Nottingham16, National Taiwan Normal University17, Johns Hopkins University18, Space Telescope Science Institute19, Institut d'Astrophysique de Paris20, University of California, Santa Cruz21, University of California, Davis22, Lawrence Livermore National Laboratory23
TL;DR: In this paper, the authors presented new improved constraints on the Hubble parameter H(z) in the redshift range 0.15 -1.1, obtained from the differential spectroscopic evolution of early-type galaxies as a function of redshift.
Abstract: We present new improved constraints on the Hubble parameter H(z) in the redshift range 0.15 < z < 1.1, obtained from the differential spectroscopic evolution of early-type galaxies as a function of redshift. We extract a large sample of early-type galaxies (~11000) from several spectroscopic surveys, spanning almost 8 billion years of cosmic lookback time (0.15 < z < 1.42). We select the most massive, red elliptical galaxies, passively evolving and without signature of ongoing star formation. Those galaxies can be used as standard cosmic chronometers, as first proposed by Jimenez & Loeb (2002), whose differential age evolution as a function of cosmic time directly probes H(z). We analyze the 4000 angstrom break (D4000) as a function of redshift, use stellar population synthesis models to theoretically calibrate the dependence of the differential age evolution on the differential D4000, and estimate the Hubble parameter taking into account both statistical and systematic errors. We provide 8 new measurements of H(z) (see table 4), and determine H(z) to a precision of 5-12%, mapping homogeneously the redshift range up to z ~ 1.1; for the first time, we place a constraint on H(z) at z ≠ 0 with a precision comparable with the one achieved for the Hubble constant (about 5-6% at z ~ 0.2), and cover a redshift range (0.5 < z < 0.8) which is crucial to distinguish many different quintessence cosmologies. These measurements have been found to best match a ΛCDM model, clearly providing a statistically robust indication that the Universe is undergoing an accelerated expansion. This method shows the potential to open a new avenue for constraining a variety of alternative cosmologies, especially when future surveys (e.g. Euclid) open the possibility to extend it up to z ~ 2.
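The cosmic-chronometer idea reduces to a one-line estimator, H(z) ≈ −1/(1+z) · dz/dt, evaluated from a finite difference of galaxy ages at neighbouring redshifts. A minimal sketch with made-up galaxy ages (the paper derives the ages from the D4000 break, which is not modelled here):

```python
# 1 Gyr^-1 expressed in km s^-1 Mpc^-1 (conversion of inverse time units)
GYR_TO_KM_S_MPC = 977.79

def hubble_from_ages(z1, t1_gyr, z2, t2_gyr):
    """Cosmic-chronometer estimate H(z) ~= -1/(1+z) * dz/dt, evaluated
    at the midpoint redshift from a finite difference of the ages of
    passively evolving galaxy populations at two redshifts."""
    z_mid = 0.5 * (z1 + z2)
    dz_dt = (z2 - z1) / (t2_gyr - t1_gyr)  # negative: older means lower z
    return -dz_dt / (1.0 + z_mid) * GYR_TO_KM_S_MPC

# hypothetical ages: 11.0 Gyr at z = 0.15 versus 10.0 Gyr at z = 0.25
h_est = hubble_from_ages(0.15, 11.0, 0.25, 10.0)
print(f"H(z=0.2) ~ {h_est:.1f} km/s/Mpc")
```

The ages here are invented purely to exercise the formula; in the actual analysis the differential ages carry the statistical and systematic errors discussed in the abstract.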
••
TL;DR: The SHARP trial as discussed by the authors showed that sorafenib consistently improved median OS and DCR compared with placebo in patients with advanced HCC, irrespective of disease etiology, baseline tumor burden, performance status, tumor stage and prior therapy.
••
University of Turin1, Masaryk University2, University of Münster3, Sapienza University of Rome4, Monash University5, Medical University of Vienna6, Medical University of Warsaw7, University of Tübingen8, Casa Sollievo della Sofferenza9, University of Bologna10, Medical University of Białystok11, Ankara University12, Charles University in Prague13, Dresden University of Technology14, University of Ulm15, Celgene16, National and Kapodistrian University of Athens17
TL;DR: MPR-R significantly prolonged progression-free survival in patients with newly diagnosed multiple myeloma who were ineligible for transplantation, with the greatest benefit observed in patients 65 to 75 years of age.
Abstract: The median follow-up period was 30 months. The median progression-free survival was significantly longer with MPR-R (31 months) than with MPR (14 months; hazard ratio, 0.49; P<0.001) or MP (13 months; hazard ratio, 0.40; P<0.001). Response rates were superior with MPR-R and MPR (77% and 68%, respectively, vs. 50% with MP; P<0.001 and P = 0.002, respectively, for the comparison with MP). The progression-free survival benefit associated with MPR-R was noted in patients 65 to 75 years of age but not in those older than 75 years of age (P = 0.001 for treatment-by-age interaction). After induction therapy, a landmark analysis showed a 66% reduction in the rate of progression with MPR-R (hazard ratio for the comparison with MPR, 0.34; P<0.001) that was age-independent. During induction therapy, the most frequent adverse events were hematologic; grade 4 neutropenia was reported in 35%, 32%, and 8% of the patients in the MPR-R, MPR, and MP groups, respectively. The 3-year rate of second primary tumors was 7% with MPR-R, 7% with MPR, and 3% with MP. Conclusions: MPR-R significantly prolonged progression-free survival in patients with newly diagnosed multiple myeloma who were ineligible for transplantation, with the greatest benefit observed in patients 65 to 75 years of age. (Funded by Celgene; MM-015 ClinicalTrials.gov number, NCT00405756.)
••
TL;DR: A combined search for the Standard Model Higgs boson with the ATLAS experiment at the LHC using datasets corresponding to integrated luminosities from 1.04 fb(-1) to 4.9 fb(-1) of pp collisions is described in this paper.
••
TL;DR: In this article, the performance of muon reconstruction, identification, and triggering in CMS has been studied using 40 inverse picobarns of data collected in pp collisions at the LHC in 2010.
Abstract: The performance of muon reconstruction, identification, and triggering in CMS has been studied using 40 inverse picobarns of data collected in pp collisions at sqrt(s) = 7 TeV at the LHC in 2010. A few benchmark sets of selection criteria covering a wide range of physics analysis needs have been examined. For all considered selections, the efficiency to reconstruct and identify a muon with a transverse momentum pT larger than a few GeV is above 95% over the whole region of pseudorapidity covered by the CMS muon system, abs(eta)<2.4, while the probability to misidentify a hadron as a muon is well below 1%. The efficiency to trigger on single muons with pT above a few GeV is higher than 90% over the full eta range, and typically substantially better. The overall momentum scale is measured to a precision of 0.2% with muons from Z decays. The transverse momentum resolution varies from 1% to 6% depending on pseudorapidity for muons with pT below 100 GeV and, using cosmic rays, it is shown to be better than 10% in the central region up to pT = 1 TeV. Observed distributions of all quantities are well reproduced by the Monte Carlo simulation.
••
TL;DR: This review aims to cover the literature on asymmetric quaternary carbons, 2-substituted and 2,2'-disubstituted 3-indolinones, from its origin to the end of 2011.
Abstract: In recent years, organocatalysis has enhanced its importance as a tool for the synthesis of enantiomerically enriched compounds. Among the candidates for organocatalysis, the construction of asymmetric quaternary carbons is regarded as a challenging problem in organic synthesis. In particular, 3,3′-disubstituted oxindoles have one or more asymmetric quaternary carbon atoms and they represent a large family of bioactive compounds and synthetic derivatives that mimic natural products. Therefore, they are good targets for drug candidates, and in the last two years many papers have appeared on organocatalytic methods for the synthesis of 3,3′-disubstituted oxindoles. Moreover, in the last few years 2-substituted and 2,2′-disubstituted 3-indolinones have also attracted the interest of chemists. This review aims to cover the literature on these topics from its origin to the end of 2011.
••
University of Texas MD Anderson Cancer Center1, University of California, San Francisco2, University of Bologna3, Bombay Hospital, Indore4, Hospital General de México5, Catholic University of Korea6, University of Rostock7, Medical University of Łódź8, University Medical Center Groningen9, Bristol-Myers Squibb10
TL;DR: Primary data showed superior efficacy for dasatinib compared with imatinib after 12 months, including significantly higher rates of complete cytogenetic response (CCyR), confirmed CCyR (primary end point), and major molecular response (MMR).
••
TL;DR: In this article, detailed measurements of the electron performance of the ATLAS detector at the LHC were reported, using decays of the Z, W and J/psi particles.
Abstract: Detailed measurements of the electron performance of the ATLAS detector at the LHC are reported, using decays of the Z, W and J/psi particles. Data collected in 2010 at sqrt(s) = 7 TeV are used, corresponding to an integrated luminosity of almost 40 pb(-1). The inter-alignment of the inner detector and the electromagnetic calorimeter, the determination of the electron energy scale and resolution, and the performance in terms of response uniformity and linearity are discussed. The electron identification, reconstruction and trigger efficiencies, as well as the charge misidentification probability, are also presented.
••
Authors and affiliations: M. A. Baranov, M. Dalmonte, G. Pupillo, and P. Zoller. Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences, A-6020 Innsbruck, Austria; Institute for Theoretical Physics, University of Innsbruck, A-6020 Innsbruck, Austria; RRC "Kurchatov Institute", Kurchatov Square 1, 123182 Moscow, Russia; Dipartimento di Fisica dell'Università di Bologna, via Irnerio 46, 40126 Bologna, Italy; ISIS (UMR 7006) and IPCMS (UMR 7504), Université de Strasbourg and CNRS, Strasbourg, France.
••
TL;DR: This review provides basic principles and a broad set of references useful for the management of phenotyping practices for the study and genetic dissection of drought tolerance and, ultimately, for the release of drought-tolerant cultivars.
Abstract: Improving crop yields under water-limited conditions is the most daunting challenge faced by breeders. To this end, accurate, relevant phenotyping plays an increasingly pivotal role in the selection of drought-resilient genotypes and, more generally, in a meaningful dissection of the quantitative genetic landscape that underlies the adaptive response of crops to drought. A major and universally recognized obstacle to a more effective translation of the results produced by drought-related studies into improved cultivars is the difficulty in properly phenotyping in a high-throughput fashion in order to identify the quantitative trait loci that govern yield and related traits across different water regimes. This review provides basic principles and a broad set of references useful for the management of phenotyping practices for the study and genetic dissection of drought tolerance and, ultimately, for the release of drought-tolerant cultivars.
••
TL;DR: The transverse momentum spectra of charged particles have been measured in pp and PbPb collisions at 2.76 TeV by the CMS experiment at the LHC as mentioned in this paper.
Abstract: The transverse momentum spectra of charged particles have been measured in pp and PbPb collisions at sqrt(sNN) = 2.76 TeV by the CMS experiment at the LHC. In the transverse momentum range pt = 5-10 GeV/c, the charged particle yield in the most central PbPb collisions is suppressed by up to a factor of 5 compared to the pp yield scaled by the number of incoherent nucleon-nucleon collisions. At higher pt, this suppression is significantly reduced, approaching roughly a factor of 2 for particles with pt in the range pt=40-100 GeV/c.
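The suppression quoted above is conventionally expressed through the nuclear modification factor R_AA: the per-event PbPb yield divided by the pp yield scaled by the number of incoherent (binary) nucleon-nucleon collisions. A minimal sketch of that arithmetic follows; the per-bin yields and the N_coll value are made-up illustrative numbers, not CMS data:

```python
import numpy as np

# Illustrative per-event pT spectra (NOT measured values): dN/dpT for pp
# and for the most central PbPb collisions, in four coarse pT bins.
pt_bin_edges = np.array([5.0, 7.5, 10.0, 40.0, 100.0])   # GeV/c (illustrative binning)
pp_yield = np.array([1.2e-2, 2.1e-3, 8.0e-6, 1.5e-8])    # pp yield per bin
pbpb_yield = np.array([4.0, 0.75, 3.5e-3, 1.2e-5])       # central PbPb yield per bin

# Mean number of binary nucleon-nucleon collisions for the most central
# events (illustrative placeholder value from a Glauber-type estimate).
n_coll = 1660.0

# Nuclear modification factor: PbPb yield over N_coll-scaled pp yield.
# R_AA = 1 would mean PbPb behaves like a superposition of pp collisions;
# R_AA ~ 0.2 corresponds to the factor-of-5 suppression at low pT, and
# R_AA ~ 0.5 to the factor-of-2 suppression at high pT described above.
r_aa = pbpb_yield / (n_coll * pp_yield)
```

With these toy numbers the low-pT bins come out near 0.2 (a factor-of-5 suppression) and the highest bin near 0.5, reproducing the qualitative trend of the measurement.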
••
Hannover Medical School1, Boston Children's Hospital2, Royal Hospital for Sick Children3, Medical University of Vienna4, Kyoto University5, University Medical Center Groningen6, University of Giessen7, Aarhus University Hospital8, Children's Hospital of Eastern Ontario9, St. Marianna University School of Medicine10, Goethe University Frankfurt11, University of Paris12, Charles University in Prague13, University of Washington14, University of Bologna15, St. Jude Children's Research Hospital16, VU University Medical Center17
TL;DR: In this article, the authors discuss differences between childhood and adult acute myeloid leukemia (AML) and highlight recommendations that are specific to children, as well as the particular relevance of new diagnostic and prognostic molecular markers in pediatric AML.
••
TL;DR: Measurement of spleen stiffness by transient elastography can be used for noninvasive assessment and monitoring of PH and to detect EV in patients with hepatitis C virus-induced cirrhosis.
••
TL;DR: In this paper, a Fourier analysis of the charged particle pair distribution in relative azimuthal angle (Delta phi = phi(a) - phi(b)) is performed to extract the coefficients v(n,n) = <cos(n Delta phi)>.
Abstract: Differential measurements of charged particle azimuthal anisotropy are presented for lead-lead collisions at root sNN = 2.76 TeV with the ATLAS detector at the LHC, based on an integrated luminosity of approximately 8 mu b(-1). This anisotropy is characterized via a Fourier expansion of the distribution of charged particles in azimuthal angle relative to the reaction plane, with the coefficients v(n) denoting the magnitude of the anisotropy. Significant v(2)-v(6) values are obtained as a function of transverse momentum (0.5 < p(T) < 20 GeV), pseudorapidity (|eta| < 2.5) and centrality, using an event-plane method. The v(n) values for n >= 3 are found to vary weakly with both eta and centrality, and their p(T) dependencies are found to follow an approximate scaling relation, v(n)(1/n)(p(T)) proportional to v(2)(1/2)(p(T)), except in the top 5% most central collisions. A Fourier analysis of the charged particle pair distribution in relative azimuthal angle (Delta phi = phi(a) - phi(b)) is performed to extract the coefficients v(n,n) = <cos(n Delta phi)>. For pairs of charged particles with a large pseudorapidity gap (|Delta eta| = |eta(a) - eta(b)| > 2) and one particle with p(T) < 3 GeV, the v(2,2)-v(6,6) values are found to factorize as v(n,n)(p(T)(a), p(T)(b)) approximate to v(n)(p(T)(a)) v(n)(p(T)(b)) in central and midcentral events. Such factorization suggests that these values of v(2,2)-v(6,6) are primarily attributable to the response of the created matter to the fluctuations in the geometry of the initial state. A detailed study shows that the v(1,1)(p(T)(a), p(T)(b)) data are consistent with the combined contributions from a rapidity-even v(1) and global momentum conservation. A two-component fit is used to extract the v(1) contribution. The extracted v(1) is observed to cross zero at p(T) approximate to 1.0 GeV, reaches a maximum at 4-5 GeV with a value comparable to that for v(3), and decreases at higher p(T).
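The pair coefficients and their factorization can be illustrated with a toy Monte Carlo: sample particles from an azimuthal distribution carrying fixed v(2) and v(3) anisotropies relative to a common symmetry plane, estimate v(n,n) = <cos(n Delta phi)> over all distinct pairs with the standard Q-vector identity, and check that it factorizes into the square of the single-particle v(n). The v(n) magnitudes below are illustrative inputs, not ATLAS measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy event sample: draw azimuthal angles from
#   dN/dphi ∝ 1 + 2 v2 cos(2 phi) + 2 v3 cos(3 phi)
# via rejection sampling (illustrative anisotropy magnitudes).
v2_true, v3_true = 0.10, 0.04
n_particles = 200_000

phi = rng.uniform(-np.pi, np.pi, 4 * n_particles)
weight = 1 + 2 * v2_true * np.cos(2 * phi) + 2 * v3_true * np.cos(3 * phi)
phi = phi[rng.uniform(0, weight.max(), phi.size) < weight][:n_particles]

# Two-particle coefficients v(n,n) = <cos(n (phi_a - phi_b))> over all
# distinct pairs, computed with the Q-vector identity
#   Q_n = sum_j exp(i n phi_j),
#   <cos n(phi_a - phi_b)> = (|Q_n|^2 - M) / (M (M - 1)).
M = phi.size
vnn = {}
for n in (2, 3):
    qn = np.sum(np.exp(1j * n * phi))
    vnn[n] = (np.abs(qn) ** 2 - M) / (M * (M - 1))

# Flow driven by a common plane factorizes: v(n,n) ≈ v_n * v_n,
# so sqrt(v(n,n)) should recover the input anisotropies.
v2_est = np.sqrt(vnn[2])
v3_est = np.sqrt(vnn[3])
```

Because every particle here is correlated only with the common symmetry plane, the factorization holds by construction; in the data, observing the same factorization for v(2,2)-v(6,6) is what supports the collective-response interpretation quoted above.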
••
University of Lorraine1, Seoul National University2, Hammersmith Hospital3, Kindai University4, University of Copenhagen5, University of Bologna6, University of Calgary7, Northeast Ohio Medical University8, University of São Paulo9, Jaslok Hospital10, Peking Union Medical College11, Ludwig Maximilian University of Munich12, University of Paris13, Fudan University14, Thomas Jefferson University15, University of Michigan16, University of Melbourne17, Institut Gustave Roussy18, Imperial College London19, University of California, San Diego20, Tokyo Medical University21, Tongji University22
TL;DR: These liver CEUS guidelines and recommendations are intended to create standard protocols for the use and administration of UCA in liver applications on an international basis and improve the management of patients worldwide.
Abstract: Initially, a set of guidelines for the use of ultrasound contrast agents was published in 2004 dealing only with liver applications. A second edition of the guidelines in 2008 reflected changes in the available contrast agents and updated the guidelines for the liver, as well as implementing some non-liver applications. Time has moved on, and the need for international guidelines on the use of CEUS in the liver has become apparent. The present document describes the third iteration of recommendations for the hepatic use of contrast enhanced ultrasound (CEUS) using contrast specific imaging techniques. This joint WFUMB-EFSUMB initiative has implicated experts from major leading ultrasound societies worldwide. These liver CEUS guidelines are simultaneously published in the official journals of both organizing federations (i.e., Ultrasound in Medicine and Biology for WFUMB and Ultraschall in der Medizin/European Journal of Ultrasound for EFSUMB). These guidelines and recommendations provide general advice on the use of all currently clinically available ultrasound contrast agents (UCA). They are intended to create standard protocols for the use and administration of UCA in liver applications on an international basis and improve the management of patients worldwide.
••
TL;DR: The present results support the idea that a large, shared real-world fall database could, potentially, provide an enhanced understanding of the fall process and the information needed to design and evaluate a high-performance fall detector.
Abstract: Despite extensive preventive efforts, falls continue to be a major source of morbidity and mortality among elders. Real-time detection of falls and their urgent communication to a telecare center may enable rapid medical assistance, thus increasing the sense of security of the elderly and reducing some of the negative consequences of falls. Many different approaches have been explored to automatically detect a fall using inertial sensors. Although previously published algorithms report high sensitivity (SE) and high specificity (SP), they have usually been tested on simulated falls performed by healthy volunteers. We recently collected acceleration data during a number of real-world falls among a patient population with a high fall risk as part of the SensAction-AAL European project. The aim of the present study is to benchmark the performance of thirteen published fall-detection algorithms when they are applied to the database of 29 real-world falls. To the best of our knowledge, this is the first systematic comparison of fall-detection algorithms tested on real-world falls. We found that the SP average of the thirteen algorithms was (mean +/- std) 83.0% +/- 30.3% (maximum value = 98%). The SE was considerably lower (SE = 57.0% +/- 27.3%, maximum value = 82.8%), much lower than the values obtained on simulated falls. The number of false alarms generated by the algorithms during 1-day monitoring of three representative fallers ranged from 3 to 85. The factors that affect the performance of the published algorithms, when they are applied to the real-world falls, are also discussed. These findings indicate the importance of testing fall-detection algorithms in real-life conditions in order to produce more effective automated alarm systems with higher acceptance. Further, the present results support the idea that a large, shared real-world fall database could, potentially, provide an enhanced understanding of the fall process and the information needed to design and evaluate a high-performance fall detector.
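The SE/SP benchmarking described above can be sketched with a toy example: flag a fall whenever the total-acceleration magnitude crosses a fixed impact threshold, then score the detector against labeled recordings. The signals, the 3 g threshold, and the detector itself are illustrative placeholders, not the SensAction-AAL data or any of the thirteen published algorithms:

```python
import numpy as np

G = 9.81                    # gravitational acceleration, m/s^2
FALL_THRESHOLD = 3.0 * G    # impact threshold on |acceleration| (illustrative)

def detects_fall(acc_magnitude):
    """Naive detector: flag a fall if |acceleration| ever exceeds the threshold."""
    return bool(np.max(acc_magnitude) > FALL_THRESHOLD)

rng = np.random.default_rng(1)
# Simulated recordings (synthetic stand-ins): falls end in a large impact
# spike; activities of daily living (ADLs) stay near 1 g with moderate motion.
falls = [np.abs(rng.normal(G, 2.0, 100)) + np.append(np.zeros(99), 35.0)
         for _ in range(29)]
adls = [np.abs(rng.normal(G, 2.0, 100)) for _ in range(100)]

# Sensitivity: fraction of real falls detected.
# Specificity: fraction of non-fall recordings correctly left unflagged.
tp = sum(detects_fall(x) for x in falls)
tn = sum(not detects_fall(x) for x in adls)
sensitivity = tp / len(falls)
specificity = tn / len(adls)
```

On this easy synthetic data a single threshold separates falls from ADLs perfectly; the point of the study above is precisely that real-world falls are far messier, which is why the same algorithms drop from near-perfect simulated-fall scores to an average SE of 57.0%.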