Showing papers by "University of Milano-Bicocca" published in 2018


Journal ArticleDOI
Gregory A. Roth1, Gregory A. Roth2, Degu Abate3, Kalkidan Hassen Abate4 +1025 more authors (333 institutions)
TL;DR: Non-communicable diseases comprised the greatest fraction of deaths, contributing to 73·4% (95% uncertainty interval [UI] 72·5–74·1) of total deaths in 2017, while communicable, maternal, neonatal, and nutritional causes accounted for 18·6% (17·9–19·6), and injuries 8·0% (7·7–8·2).

5,211 citations


Journal ArticleDOI
TL;DR: In this paper, the authors assess the burden of 29 cancer groups over time to provide a framework for policy discussion, resource allocation, and research focus, and evaluate cancer incidence, mortality, years lived with disability, years of life lost, and disability-adjusted life-years (DALYs) for 195 countries and territories by age and sex using the Global Burden of Disease study estimation methods.
Abstract: Importance The increasing burden due to cancer and other noncommunicable diseases poses a threat to human development, which has resulted in global political commitments reflected in the Sustainable Development Goals as well as the World Health Organization (WHO) Global Action Plan on Non-Communicable Diseases. To determine if these commitments have resulted in improved cancer control, quantitative assessments of the cancer burden are required. Objective To assess the burden for 29 cancer groups over time to provide a framework for policy discussion, resource allocation, and research focus. Evidence Review Cancer incidence, mortality, years lived with disability, years of life lost, and disability-adjusted life-years (DALYs) were evaluated for 195 countries and territories by age and sex using the Global Burden of Disease study estimation methods. Levels and trends were analyzed over time, as well as by the Sociodemographic Index (SDI). Changes in incident cases were categorized by changes due to epidemiological vs demographic transition. Findings In 2016, there were 17.2 million cancer cases worldwide and 8.9 million deaths. Cancer cases increased by 28% between 2006 and 2016. The smallest increase was seen in high SDI countries. Globally, population aging contributed 17%; population growth, 12%; and changes in age-specific rates, −1% to this change. The most common incident cancer globally for men was prostate cancer (1.4 million cases). The leading cause of cancer deaths and DALYs was tracheal, bronchus, and lung cancer (1.2 million deaths and 25.4 million DALYs). For women, the most common incident cancer and the leading cause of cancer deaths and DALYs was breast cancer (1.7 million incident cases, 535 000 deaths, and 14.9 million DALYs). In 2016, cancer caused 213.2 million DALYs globally for both sexes combined. Between 2006 and 2016, the average annual age-standardized incidence rates for all cancers combined increased in 130 of 195 countries or territories, and the average annual age-standardized death rates decreased within that timeframe in 143 of 195 countries or territories. Conclusions and Relevance Large disparities exist between countries in cancer incidence, deaths, and associated disability. Scaling up cancer prevention and ensuring universal access to cancer care are required for health equity and to fulfill the global commitments for noncommunicable disease and cancer control.
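
The 28% rise in incident cases decomposes additively into the three quoted drivers; as a quick arithmetic check of the figures in the abstract:

```latex
% Additive decomposition of the 2006-2016 change in incident cancer cases
\Delta_{\text{total}} \approx \Delta_{\text{aging}} + \Delta_{\text{pop.\,growth}} + \Delta_{\text{rates}}
                      = 17\% + 12\% + (-1\%) = 28\%
```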

4,621 citations


Journal ArticleDOI
Jeffrey D. Stanaway1, Ashkan Afshin1, Emmanuela Gakidou1, Stephen S Lim1 +1050 more authors (346 institutions)
TL;DR: This study estimated levels and trends in exposure, attributable deaths, and attributable disability-adjusted life-years (DALYs) by age group, sex, year, and location for 84 behavioural, environmental and occupational, and metabolic risks or groups of risks from 1990 to 2017 and explored the relationship between development and risk exposure.

2,910 citations


Journal ArticleDOI
22 Jun 2018-Science
TL;DR: It is demonstrated that, in the general population, the personality trait neuroticism is significantly correlated with almost every psychiatric disorder and migraine, and it is shown that both psychiatric and neurological disorders have robust correlations with cognitive and personality measures.
Abstract: Disorders of the brain can exhibit considerable epidemiological comorbidity and often share symptoms, provoking debate about their etiologic overlap. We quantified the genetic sharing of 25 brain disorders from genome-wide association studies of 265,218 patients and 784,643 control participants and assessed their relationship to 17 phenotypes from 1,191,588 individuals. Psychiatric disorders share common variant risk, whereas neurological disorders appear more distinct from one another and from the psychiatric disorders. We also identified significant sharing between disorders and a number of brain phenotypes, including cognitive measures. Further, we conducted simulations to explore how statistical power, diagnostic misclassification, and phenotypic heterogeneity affect genetic correlations. These results highlight the importance of common genetic variation as a risk factor for brain disorders and the value of heritability-based methods in understanding their etiology.

1,357 citations


Journal ArticleDOI
TL;DR: The findings show substantial progress in the reduction of lower respiratory infection burden, but this progress has not been equal across locations, has been driven by decreases in several primary risk factors, and might require more effort among elderly adults.
Abstract: Summary Background Lower respiratory infections are a leading cause of morbidity and mortality around the world. The Global Burden of Diseases, Injuries, and Risk Factors (GBD) Study 2016 provides an up-to-date analysis of the burden of lower respiratory infections in 195 countries. This study assesses cases, deaths, and aetiologies spanning the past 26 years and shows how the burden of lower respiratory infection has changed in people of all ages. Methods We used three separate modelling strategies for lower respiratory infections in GBD 2016: a Bayesian hierarchical ensemble modelling platform (Cause of Death Ensemble model), which uses vital registration, verbal autopsy data, and surveillance system data to predict mortality due to lower respiratory infections; a compartmental meta-regression tool (DisMod-MR), which uses scientific literature, population representative surveys, and health-care data to predict incidence, prevalence, and mortality; and modelling of counterfactual estimates of the population attributable fraction of lower respiratory infection episodes due to Streptococcus pneumoniae, Haemophilus influenzae type b, influenza, and respiratory syncytial virus. We calculated each modelled estimate for each age, sex, year, and location. We modelled the exposure level in a population for a given risk factor using DisMod-MR and a spatio-temporal Gaussian process regression, and assessed the effectiveness of targeted interventions for each risk factor in children younger than 5 years. We also did a decomposition analysis of the change in LRI deaths from 2000–16 using the risk factors associated with LRI in GBD 2016. Findings In 2016, lower respiratory infections caused 652 572 deaths (95% uncertainty interval [UI] 586 475–720 612) in children younger than 5 years (under-5s), 1 080 958 deaths (943 749–1 170 638) in adults older than 70 years, and 2 377 697 deaths (2 145 584–2 512 809) in people of all ages, worldwide. Streptococcus pneumoniae was the leading cause of lower respiratory infection morbidity and mortality globally, contributing to more deaths than all other aetiologies combined in 2016 (1 189 937 deaths, 95% UI 690 445–1 770 660). Childhood wasting remains the leading risk factor for lower respiratory infection mortality among children younger than 5 years, responsible for 61·4% of lower respiratory infection deaths in 2016 (95% UI 45·7–69·6). Interventions to improve wasting, household air pollution, ambient particulate matter pollution, and expanded antibiotic use could avert one under-5 death due to lower respiratory infection for every 4000 children treated in the countries with the highest lower respiratory infection burden. Interpretation Our findings show substantial progress in the reduction of lower respiratory infection burden, but this progress has not been equal across locations, has been driven by decreases in several primary risk factors, and might require more effort among elderly adults. By highlighting regions and populations with the highest burden, and the risk factors that could have the greatest effect, funders, policy makers, and programme implementers can more effectively reduce lower respiratory infections among the world's most susceptible populations. Funding Bill & Melinda Gates Foundation.

1,147 citations


Journal ArticleDOI
TL;DR: An in-depth analysis of the majority of the deep neural networks (DNNs) proposed in the state of the art for image recognition, giving a complete view of which solutions have been explored so far and which research directions are worth exploring in the future.
Abstract: This paper presents an in-depth analysis of the majority of the deep neural networks (DNNs) proposed in the state of the art for image recognition. For each DNN, multiple performance indices are observed, such as recognition accuracy, model complexity, computational complexity, memory usage, and inference time. The behavior of such performance indices and some combinations of them are analyzed and discussed. To measure the indices, we run the DNNs on two different computer architectures: a workstation equipped with an NVIDIA Titan X Pascal, and an embedded system based on an NVIDIA Jetson TX1 board. This experimentation allows a direct comparison between DNNs running on machines with very different computational capacities. This paper is useful for researchers, to gain a complete view of which solutions have been explored so far and which research directions are worth exploring in the future, and for practitioners, to select the DNN architecture(s) that best fit the resource constraints of practical deployments and applications. To complete this work, all the DNNs, as well as the software used for the analysis, are available online.
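
As an illustration of the kind of measurement pipeline the paper describes (not the authors' released software), here is a minimal sketch, assuming PyTorch/torchvision, that records two of the performance indices for one network: model complexity and inference time.

```python
# Minimal sketch of per-network measurements: parameter count and forward-pass
# latency. Any torchvision model can be swapped in as the DNN under test.
import time
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()  # DNN under test (random weights)

# Model complexity: number of learnable parameters.
n_params = sum(p.numel() for p in model.parameters())

# Inference time: average forward-pass latency on a fixed input batch.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    for _ in range(10):              # warm-up runs
        model(x)
    t0 = time.perf_counter()
    for _ in range(100):
        model(x)
    latency = (time.perf_counter() - t0) / 100

print(f"{n_params / 1e6:.1f}M params, {latency * 1e3:.1f} ms per inference")
```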

626 citations


Journal ArticleDOI
Craig E. Aalseth1, Fabio Acerbi2, P. Agnes3, Ivone F. M. Albuquerque4 +297 more authors (48 institutions)
TL;DR: DarkSide-20k, as discussed by the authors, is a direct WIMP search detector using a two-phase Liquid Argon Time Projection Chamber (LAr TPC) with an active (fiducial) mass of 23 t (20 t).
Abstract: Building on the successful experience in operating the DarkSide-50 detector, the DarkSide Collaboration is going to construct DarkSide-20k, a direct WIMP search detector using a two-phase Liquid Argon Time Projection Chamber (LAr TPC) with an active (fiducial) mass of 23 t (20 t). This paper describes a preliminary design for the experiment, in which the DarkSide-20k LAr TPC is deployed within a shield/veto with a spherical Liquid Scintillator Veto (LSV) inside a cylindrical Water Cherenkov Veto (WCV). This preliminary design provides a baseline for the experiment to achieve its physics goals, while further development work will lead to the final optimization of the detector parameters and an eventual technical design. Operation of DarkSide-50 demonstrated a major reduction in the dominant 39Ar background when using argon extracted from an underground source, before applying pulse shape analysis. Data from DarkSide-50, in combination with MC simulation and analytical modeling, show that a rejection factor for discrimination between electron and nuclear recoils of $>3 \times 10^{9}$ is achievable. This, along with the use of the veto system and of silicon photomultipliers in the LAr TPC, is the key to unlocking the path to large LAr TPC detector masses, while maintaining an experiment in which fewer than 0.1 events (other than $\nu$-induced nuclear recoils) are expected to occur within the WIMP search region during the planned exposure. DarkSide-20k will have ultra-low backgrounds that can be measured in situ, giving sensitivity to WIMP-nucleon cross sections of $1.2 \times 10^{-47}$ cm^{2} ($1.1 \times 10^{-46}$ cm^{2}) for WIMPs of 1 TeV/c^{2} (10 TeV/c^{2}) mass, to be achieved during a 5 yr run producing an exposure of 100 t yr free from any instrumental background.

534 citations


Journal ArticleDOI
01 Feb 2018-Leukemia
TL;DR: This study provides a comprehensive analysis of the MLL recombinome in acute leukemia and demonstrates that the establishment of patient-specific chromosomal fusion sites allows the design of specific PCR primers for minimal residual disease analyses for all patients.
Abstract: Chromosomal rearrangements of the human MLL/KMT2A gene are associated with infant, pediatric, adult and therapy-induced acute leukemias. Here we present the data obtained from 2345 acute leukemia patients. Genomic breakpoints within the MLL gene and the involved translocation partner genes (TPGs) were determined and 11 novel TPGs were identified. Thus, a total of 135 different MLL rearrangements have been identified so far, of which 94 TPGs are now characterized at the molecular level. In all, 35 out of these 94 TPGs occur recurrently, but only 9 specific gene fusions account for more than 90% of all illegitimate recombinations of the MLL gene. We observed an age-dependent breakpoint shift with breakpoints localizing within MLL intron 11 associated with acute lymphoblastic leukemia and younger patients, while breakpoints in MLL intron 9 predominate in AML or older patients. The molecular characterization of MLL breakpoints suggests different etiologies in the different age groups and allows the correlation of functional domains of the MLL gene with clinical outcome. This study provides a comprehensive analysis of the MLL recombinome in acute leukemia and demonstrates that the establishment of patient-specific chromosomal fusion sites allows the design of specific PCR primers for minimal residual disease analyses for all patients.

478 citations


Journal ArticleDOI
TL;DR: A model based on level of AFP, tumor size, and tumor number was developed to determine the risk of death from HCC-related factors after liver transplantation, and might be used to select end points and refine selection criteria for liver transplantation in patients with HCC.

373 citations



Journal ArticleDOI
TL;DR: Incidence of MM is highly variable among countries but has increased uniformly since 1990, with the largest increase in middle and low-middle SDI countries, and access to effective care is very limited in many countries of low socioeconomic development.
Abstract: Introduction Multiple myeloma (MM) is a plasma cell neoplasm with substantial morbidity and mortality. A comprehensive description of the global burden of MM is needed to help direct health policy, resource allocation, research, and patient care. Objective To describe the burden of MM and the availability of effective therapies for 21 world regions and 195 countries and territories from 1990 to 2016. Design and Setting We report incidence, mortality, and disability-adjusted life-year (DALY) estimates from the Global Burden of Disease 2016 study. Data sources include vital registration system, cancer registry, drug availability, and survey data for stem cell transplant rates. We analyzed the contribution of aging, population growth, and changes in incidence rates to the overall change in incident cases from 1990 to 2016 globally, by sociodemographic index (SDI) and by region. We collected data on approval of lenalidomide and bortezomib worldwide. Main Outcomes and Measures Multiple myeloma mortality; incidence; years lived with disabilities; years of life lost; and DALYs by age, sex, country, and year. Results Worldwide in 2016 there were 138 509 (95% uncertainty interval [UI], 121 000-155 480) incident cases of MM with an age-standardized incidence rate (ASIR) of 2.1 per 100 000 persons (95% UI, 1.8-2.3). Incident cases from 1990 to 2016 increased by 126% globally and by 106% to 192% for all SDI quintiles. The 3 world regions with the highest ASIR of MM were Australasia, North America, and Western Europe. Multiple myeloma caused 2.1 million (95% UI, 1.9-2.3 million) DALYs globally in 2016. Stem cell transplantation is routinely available in higher-income countries but is lacking in sub-Saharan Africa and parts of the Middle East. In 2016, lenalidomide and bortezomib had been approved in 73 and 103 countries, respectively. Conclusions and Relevance Incidence of MM is highly variable among countries but has increased uniformly since 1990, with the largest increase in middle and low-middle SDI countries. Access to effective care is very limited in many countries of low socioeconomic development, particularly in sub-Saharan Africa. Global health policy priorities for MM are to improve diagnostic and treatment capacity in low and middle income countries and to ensure affordability of effective medications for every patient. Research priorities are to elucidate underlying etiological factors explaining the heterogeneity in myeloma incidence.

Journal ArticleDOI
Matteo Agostini, A. M. Bakalyarov1, M. Balata, I. R. Barabanov2, Laura Baudis3, C. Bauer4, E. Bellotti5, S. Belogurov2, S. Belogurov1, Alessandro Bettini6, L. B. Bezrukov2, J. Biernat7, T. Bode8, D. Borowicz9, V.B. Brudanin9, R. Brugnera6, Allen Caldwell4, C. Cattadori5, A. Chernogorov1, T. Comellato8, V. D'Andrea, E. V. Demidova1, N. Di Marco, A. Domula10, E. Doroshkevich2, V. G. Egorov9, R. Falkenstein11, A. M. Gangapshev4, A. M. Gangapshev2, A. Garfagnini6, P. Grabmayr11, V. I. Gurentsov2, K. N. Gusev9, K. N. Gusev8, K. N. Gusev1, J. Hakenmüller4, A. Hegai11, M. Heisel4, S. Hemmer, R. Hiller3, Werner Hofmann4, Mikael Hult, L. V. Inzhechik2, J. Janicskó Csáthy8, Josef Jochum11, M. Junker, V. V. Kazalov2, Y. Kermaïdic4, Th. Kihm4, I. V. Kirpichnikov1, A. Kirsch4, A. Kish3, A. A. Klimenko9, A. A. Klimenko4, R. Kneißl4, K. T. Knöpfle4, O.I. Kochetov9, V. N. Kornoukhov1, V. N. Kornoukhov2, V. V. Kuzminov2, M. Laubenstein, A. Lazzaro8, Manfred Lindner4, Ivano Lippi, A. Lubashevskiy9, Bayarto Lubsandorzhiev2, Guillaume Lutter, C. Macolino, Bela Majorovits4, W. Maneschg4, M. Miloradovic3, R. Mingazheva3, M. Misiaszek7, P. Moseev2, Igor Nemchenok9, K. Panas7, Luciano Pandola, K. Pelczar, L. Pertoldi6, A. Pullia12, C. Ransom3, Stefano Riboldi12, N. Rumyantseva1, N. Rumyantseva9, Cinzia Sada6, F. Salamida13, C. Schmitt11, B. Schneider10, S. Schönert8, A.-K. Schütz11, O. Schulz4, B. Schwingenheuer4, O. Selivanenko2, E. Shevchik9, M. Shirchenko9, Hardy Simgen4, A.A. Smolnikov9, A.A. Smolnikov4, L. Stanco, L. Vanhoefer4, A. A. Vasenko1, A. V. Veresnikova2, K. von Sturm6, V. Wagner4, A. Wegmann4, T. Wester10, C. Wiesinger8, M. M. Wojcik7, E. A. Yanovich2, I. Zhitnikov9, S. V. Zhukov1, D. R. Zinatulina9, A. Zschocke11, Anna Julia Zsigmond4, Kai Zuber10, G. Zuzel7 
TL;DR: The GERDA experiment searches for the lepton-number-violating neutrinoless double-β decay of ^{76}Ge (^{76}Ge→^{76}Se+2e^{-}), operating bare Ge diodes with an enriched ^{76}Ge fraction in liquid argon; the exposure for broad-energy germanium type (BEGe) detectors has been increased threefold.
Abstract: The GERDA experiment searches for the lepton-number-violating neutrinoless double-β decay of ^{76}Ge (^{76}Ge→^{76}Se+2e^{-}) operating bare Ge diodes with an enriched ^{76}Ge fraction in liquid argon. The exposure for broad-energy germanium type (BEGe) detectors is increased threefold with respect to our previous data release. The BEGe detectors feature an excellent background suppression from the analysis of the time profile of the detector signals. In the analysis window a background level of 1.0_{-0.4}^{+0.6}×10^{-3} counts/(keV kg yr) has been achieved; if normalized to the energy resolution this is the lowest ever achieved in any 0νββ experiment. No signal is observed and a new 90% C.L. lower limit for the half-life of 8.0×10^{25} yr is placed when combining with our previous data. The expected median sensitivity assuming no signal is 5.8×10^{25} yr.

Journal ArticleDOI
TL;DR: The main advantages and pitfalls of metabarcoding approaches to assess parameters such as richness, abundance, taxonomic composition and species ecological values, to be used for calculation of biotic indices are discussed.

Journal ArticleDOI
TL;DR: In this paper, the performance of the modified system is studied using proton-proton collision data at center-of-mass energy √s=13 TeV, collected at the LHC in 2015 and 2016.
Abstract: The CMS muon detector system, muon reconstruction software, and high-level trigger underwent significant changes in 2013–2014 in preparation for running at higher LHC collision energy and instantaneous luminosity. The performance of the modified system is studied using proton-proton collision data at center-of-mass energy √s=13 TeV, collected at the LHC in 2015 and 2016. The measured performance parameters, including spatial resolution, efficiency, and timing, are found to meet all design specifications and are well reproduced by simulation. Despite the more challenging running conditions, the modified muon system is found to perform as well as, and in many aspects better than, previously. We dedicate this paper to the memory of Prof. Alberto Benvenuti, whose work was fundamental for the CMS muon detector.

Journal ArticleDOI
C. Alduino1, F. Alessandria, K. Alfonso2, E. Andreotti +180 more authors (17 institutions)
TL;DR: The CUORE experiment, a ton-scale cryogenic bolometer array, recently began operation at the Laboratori Nazionali del Gran Sasso in Italy, and it is applied for the first time to a high-sensitivity search for a lepton-number-violating process: ^{130}Te neutrinoless double-beta decay.
Abstract: The CUORE experiment, a ton-scale cryogenic bolometer array, recently began operation at the Laboratori Nazionali del Gran Sasso in Italy. The array represents a significant advancement in this technology, and in this work we apply it for the first time to a high-sensitivity search for a lepton-number-violating process: ^{130}Te neutrinoless double-beta decay. Examining a total TeO_{2} exposure of 86.3 kg yr, characterized by an effective energy resolution of (7.7±0.5) keV FWHM and a background in the region of interest of (0.014±0.002) counts/(keV kg yr), we find no evidence for neutrinoless double-beta decay. Including systematic uncertainties, we place a lower limit on the decay half-life of T_{1/2}^{0ν}(^{130}Te)>1.3×10^{25} yr (90% C.L.); the median statistical sensitivity of this search is 7.0×10^{24} yr. Combining this result with those of two earlier experiments, Cuoricino and CUORE-0, we find T_{1/2}^{0ν}(^{130}Te)>1.5×10^{25} yr (90% C.L.), which is the most stringent limit to date on this decay. Interpreting this result as a limit on the effective Majorana neutrino mass, we find m_{ββ}<(110-520) meV, where the range reflects the nuclear matrix element estimates employed.
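
The conversion from the half-life limit to the m_{ββ} range uses the standard 0νββ rate formula, where G^{0ν} is a phase-space factor and M^{0ν} the nuclear matrix element whose estimate spread yields the 110–520 meV interval; schematically:

```latex
% Standard 0nubb rate formula: half-life limit -> effective Majorana mass
\left(T_{1/2}^{0\nu}\right)^{-1} = G^{0\nu}\,\bigl|M^{0\nu}\bigr|^{2}\,\frac{m_{\beta\beta}^{2}}{m_{e}^{2}}
\quad\Longrightarrow\quad
m_{\beta\beta} \propto \frac{1}{\bigl|M^{0\nu}\bigr|\,\sqrt{G^{0\nu}\,T_{1/2}^{0\nu}}}
```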

Journal ArticleDOI
TL;DR: AAMI, ESH, and ISO experts agreed to develop a single universal standard for the clinical validation of blood pressure measuring devices; this statement presents the key aspects of the agreed validation procedure, which will become the AAMI/ESH/ISO universal standard and replace all previous standards/protocols.
Abstract: In the last 30 years, several organizations, such as the US Association for the Advancement of Medical Instrumentation (AAMI), the British Hypertension Society, the European Society of Hypertension (ESH) Working Group on Blood Pressure (BP) Monitoring and the International Organization for Standardization (ISO) have developed protocols for clinical validation of BP measuring devices. However, it is recognized that science, as well as patients, consumers and manufacturers would be best served if all BP measuring devices were assessed for accuracy according to an agreed single validation protocol that had global acceptance. Therefore, an international initiative was taken by AAMI, ESH and ISO experts who agreed to develop a universal standard for device validation. This statement presents the key aspects of a validation procedure, which were agreed by the AAMI, ESH and ISO representatives as the basis for a single universal validation protocol. As soon as the AAMI/ESH/ISO standard is fully developed, this will be regarded as the single universal standard and will replace all other previous standards/protocols.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the progenitors of double compact object binaries with their population-synthesis code MOBSE, studying the impact of the progenitor's metallicity, of the common-envelope parameter $\alpha$, and of the natal kicks on the properties of DNSs, BHNSs and BHBs.
Abstract: Six gravitational wave events have been reported by the LIGO-Virgo collaboration (LVC), five of them associated with black hole binary (BHB) mergers and one with a double neutron star (DNS) merger, while the coalescence of a black hole-neutron star (BHNS) binary is still missing. We investigate the progenitors of double compact object binaries with our population-synthesis code MOBSE. MOBSE includes advanced prescriptions for mass loss by stellar winds (depending on metallicity and on the Eddington ratio) and a formalism for core-collapse, electron-capture and (pulsational) pair instability supernovae. We investigate the impact of the progenitor's metallicity, of the common-envelope parameter $\alpha$ and of the natal kicks on the properties of DNSs, BHNSs and BHBs. We find that neutron-star (NS) masses in DNSs span from 1.1 to 2.0 M$_\odot$, with a preference for light NSs, while NSs in merging BHNSs have mostly large masses ($1.3-2.0$ M$_\odot$). BHs in merging BHNSs are preferentially low mass ($5-15$ M$_\odot$). BH masses in merging BHBs strongly depend on the progenitor's metallicity and span from $\sim 5$ to $\sim 45$ M$_\odot$. The local merger rate density of both BHNSs and BHBs is consistent with the values reported by the LVC in all our simulations. In contrast, the local merger rate density of DNSs matches the value inferred from the LVC only if low natal kicks are assumed. This result adds another piece to the intricate puzzle of natal kicks and DNS formation.

Journal ArticleDOI
TL;DR: These practice guidelines on the management of arterial hypertension are a concise summary of the more extensive ones prepared by the Task Force jointly appointed by the European Society of Hypertension and theEuropean Society of Cardiology.
Abstract: These practice guidelines on the management of arterial hypertension are a concise summary of the more extensive ones prepared by the Task Force jointly appointed by the European Society of Hypertension and the European Society of Cardiology. These guidelines have been prepared on the basis of the best available evidence on all issues deserving recommendations; their role must be educational and not prescriptive or coercive for the management of individual subjects who may differ widely in their personal, medical and cultural characteristics. The members of the Task Force have participated independently in the preparation of these guidelines, drawing on their academic and clinical experience and by objective examination and interpretation of all available literature. A disclosure of their potential conflict of interest is reported on the websites of the ESH and the ESC.

Proceedings ArticleDOI
27 May 2018
TL;DR: This paper surveys a new class of approaches, namely program repair techniques, whose key idea is to automatically repair software systems by producing an actual fix that can be validated by the testers before it is finally accepted, or that is adapted to properly fit the system.
Abstract: Despite their growing complexity and increasing size, modern software applications must satisfy strict release requirements that impose short bug fixing and maintenance cycles, putting significant pressure on developers who are responsible for timely producing high-quality software. To reduce developers' workload, repairing and healing techniques have been extensively investigated as solutions for efficiently repairing and maintaining software in the last few years. In particular, repairing solutions have been able to automatically produce useful fixes for several classes of bugs that might be present in software programs. A range of algorithms, techniques, and heuristics have been integrated, experimented with, and studied, producing a heterogeneous and articulated research framework where automatic repair techniques are proliferating. This paper organizes the knowledge in the area by surveying a body of 108 papers about automatic software repair techniques, illustrating the algorithms and the approaches, comparing them on representative examples, and discussing the open challenges and the empirical evidence reported so far.
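
Most of the surveyed techniques share a generate-and-validate core: produce candidate patches, then accept one only if it passes the whole test suite. Below is a toy, self-contained Python sketch of that loop (single-function operator-swap mutations; purely illustrative, not any specific tool from the survey):

```python
# Toy generate-and-validate repair: enumerate single-operator mutations of a
# buggy function and keep the first variant that passes the whole test suite.
# Real tools work on ASTs of full programs, guided by fault localization.
import itertools

BUGGY_SRC = "def biggest(a, b):\n    return a if a < b else b\n"  # bug: '<'

def mutants(src):
    """Yield variants obtained by swapping one comparison operator."""
    ops = ["<", ">", "<=", ">="]
    for old, new in itertools.permutations(ops, 2):
        if old in src:
            yield src.replace(old, new, 1)

def passes(src, tests):
    """Compile a candidate patch and run it against the test suite."""
    ns = {}
    exec(src, ns)
    f = ns["biggest"]
    return all(f(a, b) == want for a, b, want in tests)

tests = [(1, 2, 2), (5, 3, 5), (4, 4, 4)]
fix = next((m for m in mutants(BUGGY_SRC) if passes(m, tests)), None)
print(fix)  # prints the variant using '>', which passes all tests
```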

Journal ArticleDOI
TL;DR: The best proposal, named DeepBIQ, estimates the image quality by average-pooling the scores predicted on multiple subregions of the original image, achieving a linear correlation coefficient with human subjective scores of almost 0.91.
Abstract: In this work, we investigate the use of deep learning for distortion-generic blind image quality assessment. We report on different design choices, ranging from the use of features extracted from pre-trained convolutional neural networks (CNNs) as a generic image description, to the use of features extracted from a CNN fine-tuned for the image quality task. Our best proposal, named DeepBIQ, estimates the image quality by average-pooling the scores predicted on multiple subregions of the original image. Experimental results on the LIVE In the Wild Image Quality Challenge Database show that DeepBIQ outperforms the state-of-the-art methods compared, having a linear correlation coefficient with human subjective scores of almost 0.91. These results are further confirmed also on four benchmark databases of synthetically distorted images: LIVE, CSIQ, TID2008, and TID2013.
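
A minimal sketch of the score-pooling idea behind DeepBIQ, assuming a PyTorch CNN with a one-output regression head (the authors' actual backbone, fine-tuning procedure, and crop policy differ):

```python
# Score several subregions of an image with a CNN quality regressor,
# then average-pool the per-crop scores into a single quality estimate.
import torch
import torch.nn as nn
import torchvision.models as models

backbone = models.resnet18(weights=None)                 # stand-in CNN
backbone.fc = nn.Linear(backbone.fc.in_features, 1)      # quality-score head
backbone.eval()

def predict_quality(image, crop=224, n=5):
    """Average the scores predicted on n random subregions of the image."""
    _, H, W = image.shape
    scores = []
    with torch.no_grad():
        for _ in range(n):
            y = torch.randint(0, H - crop + 1, (1,)).item()
            x = torch.randint(0, W - crop + 1, (1,)).item()
            patch = image[:, y:y + crop, x:x + crop].unsqueeze(0)
            scores.append(backbone(patch).item())
    return sum(scores) / len(scores)

print(predict_quality(torch.rand(3, 512, 512)))  # toy input image
```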

Journal ArticleDOI
Albert M. Sirunyan1, Armen Tumasyan1, Wolfgang Adam, Federico Ambrogi +2240 more authors (157 institutions)
TL;DR: In this article, a measurement of the H→ττ signal strength is performed using events recorded in proton-proton collisions by the CMS experiment at the LHC in 2016 at a center-of-mass energy of 13 TeV.

Journal ArticleDOI
TL;DR: In this paper, the authors used the population-synthesis code MOBSE to investigate the demography of merging BHs, and found a much higher number of mergers from metal-poor progenitors than from metal-rich ones: the number of BHB mergers per unit mass is ~10^-4 Msun^-1 at low metallicity (Z = 0.0002 - 0.002) and drops to ~10^-7 Msun^-1 at high metallicity (Z ~ 0.02).
Abstract: The first four gravitational wave events detected by LIGO were all interpreted as merging black hole binaries (BHBs), opening a new perspective on the study of such systems. Here we use our new population-synthesis code MOBSE, an upgraded version of BSE (Hurley et al. 2002), to investigate the demography of merging BHBs. MOBSE includes metallicity-dependent prescriptions for mass loss of massive hot stars. It also accounts for the impact of the electron-scattering Eddington factor on mass loss. We perform >10^8 simulations of isolated massive binaries, with 12 different metallicities, to study the impact of mass loss, core-collapse supernovae and common envelope on merging BHBs. Accounting for the dependence of stellar winds on the Eddington factor leads to the formation of black holes (BHs) with mass up to 65 Msun at metallicity Z~0.0002. However, most BHs in merging BHBs have lower masses, and mass ratios above 0.6 are more likely. We predict that systems like GW150914, GW170814 and GW170104 can form only from progenitors with metallicity Z<=0.006, Z<=0.008 and Z<=0.012, respectively. Most merging BHBs have gone through a common envelope phase, but up to ~17 per cent of merging BHBs at low metallicity did not undergo any common envelope phase. We find a much higher number of mergers from metal-poor progenitors than from metal-rich ones: the number of BHB mergers per unit mass is ~10^-4 Msun^-1 at low metallicity (Z = 0.0002 - 0.002) and drops to ~10^-7 Msun^-1 at high metallicity (Z ~ 0.02).
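
To make the quoted merger efficiencies concrete, a worked example for an illustrative stellar population of total mass 10^6 Msun (the population mass is our choice, not from the paper):

```latex
% Expected mergers from a stellar population of total mass M_* = 10^6 Msun
N_{\mathrm{merg}} \approx \eta\,M_{*} =
\begin{cases}
10^{-4}\,\mathrm{M_{\odot}^{-1}} \times 10^{6}\,\mathrm{M_{\odot}} \approx 100 & (Z = 0.0002\text{--}0.002)\\
10^{-7}\,\mathrm{M_{\odot}^{-1}} \times 10^{6}\,\mathrm{M_{\odot}} \approx 0.1 & (Z \sim 0.02)
\end{cases}
```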

Journal ArticleDOI
Roel Aaij1, Gregory Ciezarek, P. Collins1, Stefan Roiser1 +820 more authors (51 institutions)
TL;DR: In this paper, R(D^{*-}) is measured for the first time using τ-lepton decays with three charged pions in the final state, with a data sample of proton-proton collisions collected with the LHCb detector at center-of-mass energies of 7 and 8 TeV.
Abstract: The ratio of branching fractions R(D^{*-})≡B(B^{0}→D^{*-}τ^{+}ν_{τ})/B(B^{0}→D^{*-}μ^{+}ν_{μ}) is measured using a data sample of proton-proton collisions collected with the LHCb detector at center-of-mass energies of 7 and 8 TeV, corresponding to an integrated luminosity of 3 fb^{-1}. For the first time, R(D^{*-}) is determined using the τ-lepton decays with three charged pions in the final state. The B^{0}→D^{*-}τ^{+}ν_{τ} yield is normalized to that of the B^{0}→D^{*-}π^{+}π^{-}π^{+} mode, providing a measurement of B(B^{0}→D^{*-}τ^{+}ν_{τ})/B(B^{0}→D^{*-}π^{+}π^{-}π^{+})=1.97±0.13±0.18, where the first uncertainty is statistical and the second systematic. The value of B(B^{0}→D^{*-}τ^{+}ν_{τ})=(1.42±0.094±0.129±0.054)% is obtained, where the third uncertainty is due to the limited knowledge of the branching fraction of the normalization mode. Using the well-measured branching fraction of the B^{0}→D^{*-}μ^{+}ν_{μ} decay, a value of R(D^{*-})=0.291±0.019±0.026±0.013 is established, where the third uncertainty is due to the limited knowledge of the branching fractions of the normalization and B^{0}→D^{*-}μ^{+}ν_{μ} modes. This measurement is in agreement with the standard model prediction and with previous results.
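
The normalization chain implied by the quoted numbers can be checked with simple arithmetic:

```latex
\frac{\mathcal{B}(B^{0}\to D^{*-}\tau^{+}\nu_{\tau})}{\mathcal{B}(B^{0}\to D^{*-}\pi^{+}\pi^{-}\pi^{+})} = 1.97
\;\Rightarrow\;
\mathcal{B}(B^{0}\to D^{*-}\pi^{+}\pi^{-}\pi^{+}) \approx \frac{1.42\%}{1.97} \approx 0.72\%,
\qquad
R(D^{*-}) = \frac{1.42\%}{\mathcal{B}(B^{0}\to D^{*-}\mu^{+}\nu_{\mu})} = 0.291
\;\Rightarrow\;
\mathcal{B}(B^{0}\to D^{*-}\mu^{+}\nu_{\mu}) \approx 4.9\%.
```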

Journal ArticleDOI
TL;DR: This study presents the first‐time prospective application of a deep learning model for designing new druglike compounds with desired activities and synthesized five top‐ranking compounds designed by the generative model.
Abstract: Generative artificial intelligence offers a fresh view on molecular design. We present the first-time prospective application of a deep learning model for designing new druglike compounds with desired activities. For this purpose, we trained a recurrent neural network to capture the constitution of a large set of known bioactive compounds represented as SMILES strings. By transfer learning, this general model was fine-tuned on recognizing retinoid X and peroxisome proliferator-activated receptor agonists. We synthesized five top-ranking compounds designed by the generative model. Four of the compounds revealed nanomolar to low-micromolar receptor modulatory activity in cell-based assays. Apparently, the computational model intrinsically captured relevant chemical and biological knowledge without the need for explicit rules. The results of this study advocate generative artificial intelligence for prospective de novo molecular design, and demonstrate the potential of these methods for future medicinal chemistry.
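
A compact sketch of the modelling recipe described above, assuming PyTorch (a character-level SMILES language model; the vocabulary, layer sizes, and names here are illustrative, not the authors'):

```python
# Character-level SMILES language model: pre-train on a large corpus of
# bioactive SMILES, then fine-tune the same weights on the small set of
# known RXR/PPAR agonists (transfer learning), and sample new strings.
import torch
import torch.nn as nn

VOCAB = sorted(set("CN=O()c1#[]+-nos"))  # toy SMILES alphabet
stoi = {ch: i for i, ch in enumerate(VOCAB)}

class SmilesLM(nn.Module):
    def __init__(self, vocab, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, 64)
        self.lstm = nn.LSTM(64, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x):
        h, _ = self.lstm(self.emb(x))
        return self.head(h)  # next-character logits at each position

model = SmilesLM(len(VOCAB))
smiles = "CC(=O)Oc1ccccc1"                       # aspirin, as a toy input
x = torch.tensor([[stoi[c] for c in smiles]])
print(model(x).shape)                            # (1, len(smiles), vocab)
```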

Journal ArticleDOI
TL;DR: These are the first direct limits for N masses above 500 GeV and the first limits obtained at a hadron collider for N masses below 40 GeV.
Abstract: A search for a heavy neutral lepton N of Majorana nature decaying into a W boson and a charged lepton is performed using the CMS detector at the LHC. The targeted signature consists of three prompt charged leptons in any flavor combination of electrons and muons. The data were collected in proton-proton collisions at a center-of-mass energy of 13 TeV, with an integrated luminosity of 35.9 fb^(−1). The search is performed in the N mass range between 1 GeV and 1.2 TeV. The data are found to be consistent with the expected standard model background. Upper limits are set on the values of |V_(eN)|^2and |V_(μN)|^2, where V_(lN) is the matrix element describing the mixing of N with the standard model neutrino of flavor l. These are the first direct limits for N masses above 500 GeV and the first limits obtained at a hadron collider for N masses below 40 GeV.

Journal ArticleDOI
M. Aguilar, L. Ali Cavasonza1, G. Ambrosi, Luísa Arruda +250 more authors (27 institutions)
TL;DR: The observation of new properties of secondary cosmic rays Li, Be, and B measured in the rigidity range 1.9 GV to 3.3 TV with a total of 5.4×10^{6} nuclei collected by AMS during the first five years of operation aboard the International Space Station is reported.
Abstract: We report on the observation of new properties of secondary cosmic rays Li, Be, and B measured in the rigidity (momentum per unit charge) range 1.9 GV to 3.3 TV with a total of 5.4×10^{6} nuclei collected by AMS during the first five years of operation aboard the International Space Station. The Li and B fluxes have an identical rigidity dependence above 7 GV and all three fluxes have an identical rigidity dependence above 30 GV with the Li/Be flux ratio of 2.0±0.1. The three fluxes deviate from a single power law above 200 GV in an identical way. This behavior of secondary cosmic rays has also been observed in the AMS measurement of primary cosmic rays He, C, and O but the rigidity dependences of primary cosmic rays and of secondary cosmic rays are distinctly different. In particular, above 200 GV, the secondary cosmic rays harden more than the primary cosmic rays.

Journal ArticleDOI
12 Jan 2018-Sensors
TL;DR: A region-based method for the detection and localization of anomalies in SEM images, based on Convolutional Neural Networks (CNNs) and self-similarity, which outperforms the state of the art.
Abstract: Automatic detection and localization of anomalies in nanofibrous materials help to reduce the cost of the production process and the time of the post-production visual inspection process. Amongst all the monitoring methods, those exploiting Scanning Electron Microscope (SEM) imaging are the most effective. In this paper, we propose a region-based method for the detection and localization of anomalies in SEM images, based on Convolutional Neural Networks (CNNs) and self-similarity. The method evaluates the degree of abnormality of each subregion of an image under consideration by computing a CNN-based visual similarity with respect to a dictionary of anomaly-free subregions belonging to a training set. The proposed method outperforms the state of the art.
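
A minimal sketch of the CNN-feature self-similarity scoring described above, assuming PyTorch (random weights and random data keep it self-contained; in practice a pre-trained CNN and real anomaly-free SEM patches would be used):

```python
# Score a patch by its distance to the nearest anomaly-free training patch
# in CNN feature space; large distances flag anomalous subregions.
import torch
import torch.nn.functional as F
import torchvision.models as models

cnn = models.resnet18(weights=None)
cnn.fc = torch.nn.Identity()          # use the 512-d embedding as descriptor
cnn.eval()

@torch.no_grad()
def embed(patches):                   # patches: (N, 3, 224, 224)
    return F.normalize(cnn(patches), dim=1)

# Dictionary of anomaly-free subregions (toy random data here).
dictionary = embed(torch.rand(64, 3, 224, 224))

@torch.no_grad()
def anomaly_score(patch):
    """1 - max cosine similarity to the anomaly-free dictionary."""
    e = embed(patch.unsqueeze(0))     # (1, 512)
    return (1 - dictionary @ e.T).min().item()

print(anomaly_score(torch.rand(3, 224, 224)))
```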

Journal ArticleDOI
B. P. Abbott1, Richard J. Abbott1, T. D. Abbott2, Fausto Acernese3  +1141 moreInstitutions (126)
TL;DR: The total background may be detectable with a signal-to-noise ratio of 3 after 40 months of total observation time, based on the expected timeline for Advanced LIGO and Virgo to reach their design sensitivity.
Abstract: The LIGO Scientific and Virgo Collaborations have announced the event GW170817, the first detection of gravitational waves from the coalescence of two neutron stars. The merger rate of binary neutron stars estimated from this event suggests that distant, unresolvable binary neutron stars create a significant astrophysical stochastic gravitational-wave background. The binary neutron star component will add to the contribution from binary black holes, increasing the amplitude of the total astrophysical background relative to previous expectations. In the Advanced LIGO-Virgo frequency band most sensitive to stochastic backgrounds (near 25 Hz), we predict a total astrophysical background with amplitude Ω_{GW}(f=25 Hz)=1.8_{-1.3}^{+2.7}×10^{-9} with 90% confidence, compared with Ω_{GW}(f=25 Hz)=1.1_{-0.7}^{+1.2}×10^{-9} from binary black holes alone. Assuming the most probable rate for compact binary mergers, we find that the total background may be detectable with a signal-to-noise ratio of 3 after 40 months of total observation time, based on the expected timeline for Advanced LIGO and Virgo to reach their design sensitivity.
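
The 40-month figure reflects the standard scaling of cross-correlation searches, whose expected signal-to-noise ratio grows with the square root of observation time; schematically:

```latex
% Cross-correlation search: SNR grows as the square root of observation time
\mathrm{SNR}(T) \propto \Omega_{\mathrm{GW}}\sqrt{T}
\quad\Longrightarrow\quad
T_{\mathrm{det}} \approx T_{0}\left(\frac{\mathrm{SNR}_{\mathrm{target}}}{\mathrm{SNR}(T_{0})}\right)^{2}
```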

Journal ArticleDOI
TL;DR: It is concluded that the E. focardii SODs combine cold activity, local molecular flexibility and thermo tolerance, consistent with the definition of cold-adapted enzymes.
Abstract: Oxidative stress is a particularly severe threat to Antarctic marine polar organisms because they are exposed to high dissolved oxygen and to intense UV radiation. This paper reports the features of three superoxide dismutases from the Antarctic psychrophilic ciliate Euplotes focardii, which faces two environmental challenges: oxidative stress and low temperature. Two of these are Cu,Zn superoxide dismutases (named Ef-SOD1a and Ef-SOD1b) and one belongs to the Mn-containing group (Ef-SOD2). Ef-SOD1s and Ef-SOD2 differ in their evolutionary history, expression and overall structural features. Ef-SOD1 genes are expressed at different levels, with Ef-SOD1b mRNA 20-fold higher at the ciliate's optimal growth temperature of 4 °C. All Ef-SOD enzymes are active at 4 °C, consistent with the definition of cold-adapted enzymes. At the same time, they display melting temperatures in the range of 50–70 °C and retain residual activity after incubation at 65–75 °C. Supported by data of molecular dynamics simulation, we conclude that the E. focardii SODs combine cold activity, local molecular flexibility and thermo tolerance.

Proceedings ArticleDOI
04 Apr 2018
TL;DR: This novel realistic medical image generation approach shows that GANs can generate 128 × 128 brain MR images avoiding artifacts, and even an expert physician was unable to accurately distinguish the synthetic images from the real samples in the Visual Turing Test.
Abstract: In medical imaging, it remains a challenging and valuable goal how to generate realistic medical images completely different from the original ones; the obtained synthetic images would improve diagnostic reliability, allowing for data augmentation in computer-assisted diagnosis as well as physician training. In this paper, we focus on generating synthetic multi-sequence brain Magnetic Resonance (MR) images using Generative Adversarial Networks (GANs). This involves difficulties mainly due to low contrast MR images, strong consistency in brain anatomy, and intra-sequence variability. Our novel realistic medical image generation approach shows that GANs can generate 128 × 128 brain MR images avoiding artifacts. In our preliminary validation, even an expert physician was unable to accurately distinguish the synthetic images from the real samples in the Visual Turing Test.
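
As an illustration of the kind of generator involved, here is a minimal DCGAN-style PyTorch sketch that outputs 128 × 128 single-channel images (the architecture details are our assumptions, not the paper's exact model):

```python
# DCGAN-style generator: upsample a latent noise vector to a 128x128 image
# through a stack of transposed convolutions.
import torch
import torch.nn as nn

def block(cin, cout):
    """Transposed-conv block that doubles spatial resolution."""
    return nn.Sequential(
        nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1, bias=False),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

generator = nn.Sequential(
    nn.ConvTranspose2d(100, 512, 4, 1, 0, bias=False),  # 1x1 -> 4x4
    nn.BatchNorm2d(512), nn.ReLU(inplace=True),
    block(512, 256),   # 8x8
    block(256, 128),   # 16x16
    block(128, 64),    # 32x32
    block(64, 32),     # 64x64
    nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),   # 128x128
    nn.Tanh(),
)

z = torch.randn(1, 100, 1, 1)   # latent noise vector
print(generator(z).shape)       # torch.Size([1, 1, 128, 128])
```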