
Showing papers by "Fondazione Bruno Kessler" published in 2021


Journal ArticleDOI
05 Jul 2021-PeerJ
TL;DR: In this paper, the authors compare the performance of R-squared and SMAPE with respect to the distribution of ground truth elements, and show that the coefficient of determination is more informative and truthful than SMAPE, and does not have the interpretability limitations of MSE, RMSE, MAE and MAPE.
Abstract: Regression analysis makes up a large part of supervised machine learning, and consists of the prediction of a continuous dependent target from a set of other predictor variables. The difference between binary classification and regression is in the target range: in binary classification, the target can have only two values (usually encoded as 0 and 1), while in regression the target can have multiple values. Even though regression analysis has been employed in a huge number of machine learning studies, no consensus has been reached on a single, unified, standard metric to assess the results of the regression itself. Many studies employ the mean square error (MSE) and its rooted variant (RMSE), or the mean absolute error (MAE) and its percentage variant (MAPE). Although useful, these rates share a common drawback: since their values can range between zero and +infinity, a single value of them does not say much about the performance of the regression with respect to the distribution of the ground truth elements. In this study, we focus on two rates that actually generate a high score only if the majority of the elements of a ground truth group have been correctly predicted: the coefficient of determination (also known as R-squared or R2) and the symmetric mean absolute percentage error (SMAPE). After showing their mathematical properties, we report a comparison between R2 and SMAPE in several use cases and in two real medical scenarios. Our results demonstrate that the coefficient of determination (R-squared) is more informative and truthful than SMAPE, and does not have the interpretability limitations of MSE, RMSE, MAE, and MAPE. We therefore suggest the usage of R-squared as the standard metric to evaluate regression analyses in any scientific domain.
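
As a quick illustration of the two rates compared above, here is a minimal sketch in Python (NumPy assumed; the SMAPE formulation with the two-sided denominator is one of several variants found in the literature):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.
    Upper-bounded by 1 (perfect prediction); can be negative."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error, bounded in [0, 2]
    with this formulation (assumes no pair is simultaneously zero)."""
    denom = np.abs(y_true) + np.abs(y_pred)
    return np.mean(2.0 * np.abs(y_pred - y_true) / denom)

y_true = np.array([3.0, 5.0, 7.5, 10.0])
y_pred = np.array([2.8, 5.4, 7.0, 11.0])
print(r_squared(y_true, y_pred), smape(y_true, y_pred))
```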

568 citations


Journal ArticleDOI
TL;DR: The advantages of the Matthews correlation coefficient (MCC) over accuracy and F1 score have already been shown; in this paper, the authors reaffirm that MCC is a robust metric that summarizes classifier performance in a single value when positive and negative cases are of equal importance.
Abstract: Evaluating binary classifications is a pivotal task in statistics and machine learning, because it can influence decisions in multiple areas, including for example prognosis or therapies of patients in critical conditions. The scientific community has not agreed on a general-purpose statistical indicator for evaluating two-class confusion matrices (having true positives, true negatives, false positives, and false negatives) yet, even if advantages of the Matthews correlation coefficient (MCC) over accuracy and F1 score have already been shown. In this manuscript, we reaffirm that MCC is a robust metric that summarizes the classifier performance in a single value, if positive and negative cases are of equal importance. We compare MCC to other metrics which value positive and negative cases equally: balanced accuracy (BA), bookmaker informedness (BM), and markedness (MK). We explain the mathematical relationships between MCC and these indicators, then show some use cases and a bioinformatics scenario where these metrics disagree and where MCC generates a more informative response. Additionally, we describe three exceptions where BM can be more appropriate: analyzing classifications where dataset prevalence is unrepresentative, comparing classifiers on different datasets, and assessing the random guessing level of a classifier. Except in these cases, we believe that MCC is the most informative among the single metrics discussed, and suggest it as standard measure for scientists of all fields. A Matthews correlation coefficient close to +1, in fact, means having high values for all the other confusion matrix metrics. The same cannot be said for balanced accuracy, markedness, bookmaker informedness, accuracy, and F1 score.
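
A compact sketch of the four metrics that value positive and negative cases equally, computed from a 2x2 confusion matrix, including the geometric-mean identity MCC^2 = BM * MK that links them:

```python
import math

def equal_importance_metrics(tp, tn, fp, fn):
    """BA, BM, MK and MCC for a two-class confusion matrix."""
    tpr = tp / (tp + fn)   # sensitivity / recall
    tnr = tn / (tn + fp)   # specificity
    ppv = tp / (tp + fp)   # precision
    npv = tn / (tn + fn)   # negative predictive value
    ba = (tpr + tnr) / 2   # balanced accuracy
    bm = tpr + tnr - 1     # bookmaker informedness (Youden's J)
    mk = ppv + npv - 1     # markedness
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return ba, bm, mk, mcc

ba, bm, mk, mcc = equal_importance_metrics(tp=95, tn=60, fp=40, fn=5)
print(round(ba, 3), round(bm, 3), round(mk, 3), round(mcc, 3))
print(math.isclose(mcc**2, bm * mk))  # True: MCC is the geometric mean of BM and MK
```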

241 citations


Journal ArticleDOI
Daniele Paolo Anderle1, V. Bertone2, Xu Cao3, Lei Chang4, Ningbo Chang5, Gu Chen6, Xurong Chen3, Zhuojun Chen7, Zhu-Fang Cui8, Ling-Yun Dai7, Weitian Deng9, Minghui Ding10, Xu Feng11, Chang Gong11, Long-Cheng Gui12, Feng-Kun Guo3, Chengdong Han3, J. J. He13, Tie-Jiun Hou14, Hongxia Huang13, Yin Huang15, Krešimir Kumerički16, L. P. Kaptari3, L. P. Kaptari17, Demin Li18, Hengne Li1, Minxiang Li19, Minxiang Li3, Xue-Qian Li4, Y. T. Liang3, Zuotang Liang20, Chen Liu20, Chuan Liu11, Guoming Liu1, Jie Liu3, Liuming Liu3, X. Liu19, Tiehui Liu20, Xiaofeng Luo21, Zhun Lyu22, Bo-Qiang Ma11, Fu Ma3, Jian-Ping Ma3, Yu-Gang Ma23, Yu-Gang Ma3, Lijun Mao3, C. Mezrag2, Hervé Moutarde2, Jialun Ping13, Si-Xue Qin24, Hang Ren3, Craig D. Roberts8, Juan Rojo25, Guodong Shen3, Chao Shi26, Qintao Song18, Hao Sun27, Paweł Sznajder, Enke Wang1, Fan Wang8, Qian Wang1, Rong Wang3, Ruiru Wang3, Taofeng Wang28, Wei Wang29, Xiaoyu Wang18, Xiaoyun Wang30, Jia-Jun Wu3, Xing-Gang Wu24, Lei Xia31, Bo-Wen Xiao32, Bo-Wen Xiao21, Guoqing Xiao3, Ju Jun Xie3, Ya-Ping Xie3, Hongxi Xing1, Hu-Shan Xu3, Nu Xu21, Nu Xu3, Shu-Sheng Xu33, Mengshi Yan11, Wenbiao Yan31, Wencheng Yan18, Xinhu Yan34, Jiancheng Yang3, Yi Bo Yang3, Zhi Yang35, De-Liang Yao7, Z. Ye36, Pei-Lin Yin33, C.-P. Yuan37, Wenlong Zhan3, Jianhui Zhang38, Jinlong Zhang20, Pengming Zhang39, Yifei Zhang31, Chao Hsi Chang3, Zhenyu Zhang40, Hongwei Zhao3, Kuang Ta Chao11, Qiang Zhao3, Yuxiang Zhao3, Zhengguo Zhao31, Liang Zheng41, Jian Zhou20, Xiang Zhou40, Xiaorong Zhou31, Bing-Song Zou3, Liping Zou3 
TL;DR: In this article, an Electron-ion collider in China (EicC) has been proposed, which will be constructed based on an upgraded heavy-ion accelerator, High Intensity heavy ion Accelerator Facility (HIAF), together with a new electron ring.
Abstract: Lepton scattering is an established ideal tool for studying inner structure of small particles such as nucleons as well as nuclei. As a future high energy nuclear physics project, an Electron-ion collider in China (EicC) has been proposed. It will be constructed based on an upgraded heavy-ion accelerator, High Intensity heavy-ion Accelerator Facility (HIAF) which is currently under construction, together with a new electron ring. The proposed collider will provide highly polarized electrons (with a polarization of ∼80%) and protons (with a polarization of ∼70%) with variable center of mass energies from 15 to 20 GeV and a luminosity of (2–3) × 10³³ cm⁻² · s⁻¹. Polarized deuterons and Helium-3, as well as unpolarized ion beams from Carbon to Uranium, will also be available at the EicC. The main foci of the EicC will be precision measurements of the structure of the nucleon in the sea quark region, including 3D tomography of the nucleon; the partonic structure of nuclei and the parton interaction with the nuclear environment; and the exotic states, especially those with heavy flavor quark contents. In addition, issues fundamental to understanding the origin of mass could be addressed by measurements of heavy quarkonia near-threshold production at the EicC. In order to achieve the above-mentioned physics goals, a hermetical detector system will be constructed with cutting-edge technologies. This document is the result of collective contributions and valuable inputs from experts across the globe. The EicC physics program complements the ongoing scientific programs at the Jefferson Laboratory and the future EIC project in the United States. The success of this project will also advance both nuclear and particle physics as well as accelerator and detector technology in China.

154 citations


Journal ArticleDOI
TL;DR: Deep change vector analysis (DCVA) and fuzzy rules can be applied to identify changed buildings (new/destroyed) in bitemporal SAR images using a cycle-consistent generative adversarial network (CycleGAN).
Abstract: Building change detection (CD), important for its application in urban monitoring, can be performed in near real time by comparing prechange and postchange very-high-spatial-resolution (VHR) synthetic-aperture-radar (SAR) images. However, multitemporal VHR SAR images are complex: they show high spatial correlation, are prone to shadows, and show an inhomogeneous signature. Spatial context needs to be taken into account to effectively detect a change in such images. Recently, convolutional-neural-network (CNN)-based transfer learning techniques have shown strong performance for CD in VHR multispectral images. However, their direct use for SAR CD is impeded by the absence of labeled SAR data and, thus, pretrained networks. To overcome this, we exploit the availability of paired unlabeled SAR and optical images to train for the suboptimal task of transcoding SAR images into optical images using a cycle-consistent generative adversarial network (CycleGAN). The CycleGAN consists of two generator networks: one for transcoding SAR images into the optical image domain and the other for projecting optical images into the SAR image domain. After unsupervised training, the generator transcoding SAR images into optical ones is used as a bitemporal deep feature extractor to extract optical-like features from bitemporal SAR images. Thus, deep change vector analysis (DCVA) and fuzzy rules can be applied to identify changed buildings (new/destroyed). We validate our method on two data sets made up of pairs of bitemporal VHR SAR images of the cities of L’Aquila (Italy) and Trento (Italy).
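
The final DCVA step operates on the per-pixel difference of the extracted deep features; below is a minimal sketch of magnitude-based deep change vector analysis (the function name and threshold choice are illustrative, not the paper's exact pipeline, which additionally uses fuzzy rules):

```python
import numpy as np

def dcva_change_map(feat_t1, feat_t2, threshold):
    """Deep change vector analysis on bitemporal deep features.

    feat_t1, feat_t2: (H, W, C) feature maps extracted from the pre- and
    post-change images by the same frozen network (here, the SAR->optical
    CycleGAN generator used as a feature extractor).
    Returns a boolean change map."""
    g = feat_t2.astype(np.float64) - feat_t1.astype(np.float64)  # change vectors
    magnitude = np.linalg.norm(g, axis=-1)                       # per-pixel magnitude
    return magnitude > threshold

# toy usage with random features; a real threshold would come from,
# e.g., automatic thresholding or the fuzzy rules mentioned in the paper
h = w = 8
t1, t2 = np.random.rand(h, w, 16), np.random.rand(h, w, 16)
print(dcva_change_map(t1, t2, threshold=1.0).sum(), "pixels flagged as changed")
```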

128 citations


Journal ArticleDOI
01 Mar 2021
TL;DR: In this paper, the authors analyzed quarantined case contacts, identified between February 20 and April 16, 2020, in the Lombardy region of Italy, in order to estimate the risk of developing symptoms and of progressing to critical disease in individuals infected with severe acute respiratory syndrome coronavirus 2.
Abstract: Importance Solid estimates of the risk of developing symptoms and of progressing to critical disease in individuals infected with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) are key to interpreting coronavirus disease 2019 (COVID-19) dynamics, identifying the settings and the segments of the population where transmission is more likely to remain undetected, and defining effective control strategies. Objective To estimate the association of age with the likelihood of developing symptoms and the association of age with the likelihood of progressing to critical illness after SARS-CoV-2 infection. Design, Setting, and Participants This cohort study analyzed quarantined case contacts, identified between February 20 and April 16, 2020, in the Lombardy region of Italy. Contacts were monitored daily for symptoms and tested for SARS-CoV-2 infection, by either real-time reverse transcriptase–polymerase chain reaction using nasopharyngeal swabs or retrospectively via IgG serological assays. Close contacts of individuals with laboratory-confirmed COVID-19 were selected as those belonging to clusters (ie, groups of contacts associated with an index case) where all individuals were followed up for symptoms and tested for SARS-CoV-2 infection. Data were analyzed from February to June 2020. Exposure Close contact with individuals with confirmed COVID-19 cases as identified by contact tracing operations. Main Outcomes and Measures Age-specific estimates of the risk of developing respiratory symptoms or fever greater than or equal to 37.5 °C and of experiencing critical disease (defined as requiring intensive care or resulting in death) in SARS-CoV-2–infected case contacts. Results In total, 5484 case contacts (median [interquartile range] age, 50 [30-61] years; 3086 female contacts [56.3%]) were analyzed, 2824 of whom (51.5%) tested positive for SARS-CoV-2 (median [interquartile range] age, 53 [34-64] years; 1604 female contacts [56.8%]). The proportion of infected persons who developed symptoms ranged from 18.1% (95% CI, 13.9%-22.9%) among participants younger than 20 years to 64.6% (95% CI, 56.6%-72.0%) for those aged 80 years or older. Most infected contacts (1948 of 2824 individuals [69.0%]) did not develop respiratory symptoms or fever greater than or equal to 37.5 °C. Only 26.1% (95% CI, 24.1%-28.2%) of infected individuals younger than 60 years developed respiratory symptoms or fever greater than or equal to 37.5 °C; among infected participants older than 60 years, 6.6% (95% CI, 5.1%-8.3%) developed critical disease. Female patients were 52.7% (95% CI, 24.4%-70.7%) less likely than male patients to develop critical disease after SARS-CoV-2 infection. Conclusions and Relevance In this Italian cohort study of close contacts of patients with confirmed SARS-CoV-2 infection, more than one-half of individuals tested positive for the virus. However, most infected individuals did not develop respiratory symptoms or fever. The low proportion of children and young adults who developed symptoms highlights the possible challenges in readily identifying SARS-CoV-2 infections.

109 citations


Journal ArticleDOI
TL;DR: MuST-C, a large and freely available Multilingual Speech Translation Corpus built from English TED Talks, is presented, describing the corpus creation methodology and discussing the outcomes of empirical and manual quality evaluations.

105 citations


Journal ArticleDOI
TL;DR: In this article, the Matthews correlation coefficient (MCC) was shown to have several advantages over confusion entropy, accuracy, F1 score, balanced accuracy, bookmaker informedness, markedness, and diagnostic odds ratio.
Abstract: Even if measuring the outcome of binary classifications is a pivotal task in machine learning and statistics, no consensus has been reached yet about which statistical rate to employ to this end. In the last century, the computer science and statistics communities have introduced several scores summing up the correctness of the predictions with respect to the ground truth values. Among these scores, the Matthews correlation coefficient (MCC) was shown to have several advantages over confusion entropy, accuracy, F1 score, balanced accuracy, bookmaker informedness, markedness, and diagnostic odds ratio: MCC, in fact, produces a high score only if the majority of the predicted negative data instances and the majority of the positive data instances are correct, and therefore it is very trustworthy on imbalanced datasets. In this study, we compare MCC with two other popular scores: Cohen’s Kappa, a metric that originated in social sciences, and the Brier score, a strictly proper scoring function which emerged in weather forecasting studies. After explaining the mathematical properties and the relationships between MCC and each of these two rates, we report some use cases where these scores generate different values, which lead to discordant outcomes, where MCC provides a more truthful and informative result. We highlight the reasons why it is more advisable to use MCC rather than Cohen’s Kappa and the Brier score to evaluate binary classifications.
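
For reference, a compact sketch of the three scores under comparison (binary case; these are the standard textbook formulations, with the Brier score taking predicted probabilities rather than a confusion matrix, and being a loss where lower is better):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient, in [-1, +1]."""
    return (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

def cohens_kappa(tp, tn, fp, fn):
    """Agreement corrected for chance agreement, in [-1, +1]."""
    n = tp + tn + fp + fn
    p_obs = (tp + tn) / n
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    return (p_obs - p_exp) / (1 - p_exp)

def brier(y_true, p_pred):
    """Mean squared error of predicted probabilities; lower is better."""
    return sum((p - y) ** 2 for y, p in zip(y_true, p_pred)) / len(y_true)

print(round(mcc(90, 5, 5, 0), 3), round(cohens_kappa(90, 5, 5, 0), 3))
print(brier([1, 1, 0], [0.9, 0.8, 0.3]))
```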

100 citations


Journal ArticleDOI
TL;DR: A haloscope of the QUAX experiment, composed of an oxygen-free high-thermal-conductivity Cu cavity inside an 8.1 T magnet, cooled to ∼200 mK and read out by an amplifier operating at the standard quantum limit, was put into operation for the search of galactic axions, as discussed by the authors.
Abstract: A haloscope of the QUAX–aγ experiment, composed of an oxygen-free high-thermal-conductivity Cu cavity inside an 8.1 T magnet and cooled to ∼200 mK, is put in operation for the search of galactic axions with mass m_a ≃ 43 μeV. The power emitted by the resonant cavity is amplified with a Josephson parametric amplifier whose noise fluctuations are at the standard quantum limit. With the data collected in about 1 h at the cavity frequency ν_c = 10.40176 GHz, the experiment reaches the sensitivity necessary for the detection of galactic QCD axions, setting the 90% confidence level limit to the axion-photon coupling g_aγγ < 0.766 × 10⁻¹³ GeV⁻¹.

88 citations


Journal ArticleDOI
TL;DR: A broad spectrum of such information is evaluated in this article, with a view to consolidating the facts and therefrom moving toward a coherent, unified picture of hadron structure and the role that diquark correlations might play.

85 citations


Journal ArticleDOI
TL;DR: This paper presents a survey on current solutions for the deployment of services in remote/rural areas by exploiting satellites, highlighting that low-orbit satellites offer an efficient solution to support long-range services, with a good trade-off in terms of coverage and latency.
Abstract: The Internet of Things (IoT) is expected to bring new opportunities for improving several services for the Society, from transportation to agriculture, from smart cities to fleet management. In this framework, massive connectivity represents one of the key issues. This is especially relevant when IoT systems are expected to cover a large geographical area or a region not reached by terrestrial network connections. In such scenarios, the usage of satellites might represent a viable solution for providing wide area coverage and connectivity in a flexible and affordable manner. Our paper presents a survey on current solutions for the deployment of IoT services in remote/rural areas by exploiting satellites. Several architectures and technical solutions are analyzed, underlining their features and limitations, and real test cases are presented. It has been highlighted that low-orbit satellites offer an efficient solution to support long-range IoT services, with a good trade-off in terms of coverage and latency. Moreover, open issues, new challenges, and innovative technologies are discussed, carefully considering the constraints that the current IoT standardization framework will impose on the practical implementation of future satellite-based IoT systems.

79 citations


Journal ArticleDOI
TL;DR: The results show that isolation and tracing can help control re-emerging outbreaks when some conditions are met and the inefficacy of a less privacy-preserving tracing involving second order contacts is observed.
Abstract: Digital contact tracing is a relevant tool to control infectious disease outbreaks, including the COVID-19 epidemic. Early work evaluating digital contact tracing omitted important features and heterogeneities of real-world contact patterns influencing contagion dynamics. We fill this gap with a modeling framework informed by empirical high-resolution contact data to analyze the impact of digital contact tracing in the COVID-19 pandemic. We investigate how well contact tracing apps, coupled with the quarantine of identified contacts, can mitigate the spread in real environments. We find that restrictive policies are more effective in containing the epidemic but come at the cost of unnecessary large-scale quarantines. Policy evaluation through their efficiency and cost results in optimized solutions which only consider contacts longer than 15-20 minutes and closer than 2-3 meters to be at risk. Our results show that isolation and tracing can help control re-emerging outbreaks when some conditions are met: (i) a reduction of the reproductive number through masks and physical distance; (ii) a low-delay isolation of infected individuals; (iii) a high compliance. Finally, we observe the inefficacy of a less privacy-preserving tracing involving second order contacts. Our results may inform digital contact tracing efforts currently being implemented across several countries worldwide.
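
The optimized policy described above reduces to a simple duration/proximity filter on traced contacts; a toy sketch (field names are illustrative, thresholds taken from the ranges quoted in the abstract):

```python
def at_risk(contacts, min_duration_min=15, max_distance_m=2.0):
    """Flag contacts to quarantine under a duration/proximity policy of the
    kind the paper finds optimal (contacts longer than ~15-20 minutes and
    closer than ~2-3 meters)."""
    return [c for c in contacts
            if c["duration_min"] >= min_duration_min
            and c["distance_m"] <= max_distance_m]

contacts = [
    {"id": "a", "duration_min": 30, "distance_m": 1.0},  # quarantined
    {"id": "b", "duration_min": 5,  "distance_m": 0.5},  # too short
    {"id": "c", "duration_min": 45, "distance_m": 4.0},  # too far
]
print([c["id"] for c in at_risk(contacts)])  # ['a']
```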

Journal ArticleDOI
TL;DR: Results suggest that HLA antigens may influence SARS-CoV-2 infection and clinical evolution of COVID-19 and confirm that blood group A individuals are at greater risk of infection, providing clues on the spread of the disease and indications about infection prognosis and vaccination strategies.
Abstract: BACKGROUND: SARS-CoV-2 infection is heterogeneous in clinical presentation and disease evolution. To investigate whether immune response to the virus can be influenced by genetic factors, we compared HLA and ABO frequencies in organ transplant recipients and waitlisted patients according to presence or absence of SARS-CoV-2 infection. METHODS: A retrospective analysis was performed on an Italian cohort composed of transplanted and waitlisted patients in a January 2002-March 2020 time frame. Data from this cohort were merged with the Italian registry of COVID subjects, evaluating infection status of transplanted and waitlisted patients. A total of 56,304 cases were studied with the aim of comparing HLA and ABO frequencies according to the presence (n=265, COVID+) or absence (n=56,039, COVID−) of SARS-CoV-2 infection. RESULTS: The cumulative incidence rate of COVID-19 was 0.112% in the Italian population and 0.462% in waitlisted/transplanted patients (OR=4.2, 95% CI [3.7-4.7], P<0.0001). HLA-DRB1*08 was more frequent in COVID+ than in COVID− patients (9.7% vs 5.2%: OR=1.9, 95% CI [1.2-3.1]; P=0.003; Pc=0.036). In COVID+ patients, HLA-DRB1*08 was correlated to mortality (6.9% in living vs 17.5% in deceased: OR=2.9, 95% CI [1.15-7.21]; P=0.023). Peptide binding prediction analyses showed that these DRB1*08 alleles were unable to bind any of the viral peptides with high affinity. Lastly, blood group A was more frequent in COVID+ (45.5%) than in COVID− patients (39.0%; OR=1.3, 95% CI [1.02-1.66]; P=0.03). CONCLUSIONS: Even though preliminary, these results suggest that HLA antigens may influence SARS-CoV-2 infection and clinical evolution of COVID-19 and confirm that blood group A individuals are at greater risk of infection, providing clues on the spread of the disease and indications about infection prognosis and vaccination strategies.

Journal ArticleDOI
01 Mar 2021
TL;DR: In this paper, a chemical processing technique based on hydroiodic acid to carefully control the degree of residual oxidation is presented; the results show that the oxygen content of the reduced graphene oxides was finely tuned from 33.6 to 10.7 atom %.
Abstract: The need to recover the graphene properties in terms of electrical and thermal conductivity calls for the application of reduction processes leading to the removal of oxygen atoms from the graphene oxide sheet surface. The recombination of carbon-carbon double bonds causes a partial recovery of the original graphene properties, mainly limited by the presence of residual oxygen atoms and lattice defects. However, the loss of polar oxygen-based functional groups renders the material dispersibility rather complicated. In addition, oxygen-containing functional groups are reaction sites useful to further bind active molecules to engineer the reduced graphene sheets. For these reasons, a variety of chemical processes are described in the literature to reduce the graphene oxide. However, it is of great importance to select a chemical process enabling a fine modulation of the residual oxygen content, thus tuning the properties of the final product. In this work, we present a chemical-processing technique based on hydroiodic acid to carefully control the degree of residual oxidation. Graphene oxides were reduced using hydroiodic acid with concentrations from 0.06 to 0.95 mol L-1. Their properties were characterized in detail and tested, and the results showed that their oxygen content was finely tuned from 33.6 to 10.7 atom %. This allows carefully tailoring the material properties with respect to the desired application, which is exemplified by the variation of the bulk resistance of films of the obtained rGO from 92 Ω to 14.8 MΩ.

Journal ArticleDOI
TL;DR: On March 11, 2020, Italy imposed a national lockdown to curtail the spread of severe acute respiratory syndrome coronavirus 2 and it is estimated that, 14 days after lockdown, the net reproduction number had dropped below 1 and remained stable at 0.76 (95% CI 0.67–0.85) in all regions for >3 of the following weeks.
Abstract: On March 11, 2020, Italy imposed a national lockdown to curtail the spread of severe acute respiratory syndrome coronavirus 2. We estimate that, 14 days after lockdown, the net reproduction number had dropped below 1 and remained stable at ≈0.76 (95% CI 0.67-0.85) in all regions for >3 of the following weeks.

Journal ArticleDOI
15 Sep 2021
TL;DR: The first global infodemiology conference was organized during June and July 2020, with a follow-up process from August to October 2020, to review current multidisciplinary evidence, interventions, and practices that can be applied to the COVID-19 infodemic response.
Abstract: Background: An infodemic is an overflow of information of varying quality that surges across digital and physical environments during an acute public health event. It leads to confusion, risk-taking, and behaviors that can harm health, and to erosion of trust in health authorities and public health responses. Owing to the global scale and high stakes of the health emergency, responding to the infodemic related to the pandemic is particularly urgent. Building on diverse research disciplines and expanding the discipline of infodemiology, more evidence-based interventions are needed to design infodemic management interventions and tools and to enable health emergency responders to implement them. Objective: The World Health Organization organized the first global infodemiology conference, entirely online, during June and July 2020, with a follow-up process from August to October 2020, to review current multidisciplinary evidence, interventions, and practices that can be applied to the COVID-19 infodemic response. This resulted in the creation of a public health research agenda for managing infodemics. Methods: As part of the conference, a structured expert judgment synthesis method was used to formulate a public health research agenda. A total of 110 participants represented diverse scientific disciplines from over 35 countries and global public health implementing partners. The conference used a laddered discussion sprint methodology by rotating participant teams, and a managed follow-up process was used to assemble a research agenda based on the discussion and structured expert feedback. This resulted in a five-workstream frame of the research agenda for infodemic management and 166 suggested research questions. The participants then ranked the questions for feasibility and expected public health impact. The expert consensus was summarized in a public health research agenda that included a list of priority research questions. Results: The public health research agenda for infodemic management has five workstreams: (1) measuring and continuously monitoring the impact of infodemics during health emergencies; (2) detecting signals and understanding the spread and risk of infodemics; (3) responding and deploying interventions that mitigate and protect against infodemics and their harmful effects; (4) evaluating infodemic interventions and strengthening the resilience of individuals and communities to infodemics; and (5) promoting the development, adaptation, and application of interventions and toolkits for infodemic management. Each workstream identifies research questions; across the five workstreams, 49 research questions are highlighted as high priority. Conclusions: Public health authorities need to develop, validate, implement, and adapt tools and interventions for managing infodemics in acute public health events in ways that are appropriate for their countries and contexts. Infodemiology provides a scientific foundation to make this possible. This research agenda proposes a structured framework for targeted investment for the scientific community, policy makers, implementing organizations, and other stakeholders to consider.

Journal ArticleDOI
TL;DR: A semisupervised CD method that encodes multitemporal images as a graph via multiscale parcel segmentation that effectively captures the spatial and spectral aspects of the multitemporal images.
Abstract: Most change detection (CD) methods are unsupervised, as collecting substantial multitemporal training data is challenging. Unsupervised CD methods are driven by heuristics and lack the capability to learn from data. However, in many real-world applications, it is possible to collect a small amount of labeled data scattered across the analyzed scene. Such few scattered labeled samples in the pool of unlabeled samples can be effectively handled by a graph convolutional network (GCN), which has recently shown good performance in semisupervised single-date analysis, to improve change detection performance. Based on this, we propose a semisupervised CD method that encodes multitemporal images as a graph via multiscale parcel segmentation that effectively captures the spatial and spectral aspects of the multitemporal images. The graph is further processed through the GCN to learn a multitemporal model. Information from the labeled parcels is propagated to the unlabeled ones over training iterations. By exploiting the homogeneity of the parcels, the model is used to infer the label at a pixel level. To show the effectiveness of the proposed method, we tested it on a multitemporal very-high-spatial-resolution (VHR) data set acquired by the Pleiades sensor over Trento, Italy.
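
The label-propagation mechanism at the heart of a GCN can be shown with one layer of the standard graph-convolution update; a minimal NumPy sketch (the toy graph and feature sizes are illustrative, not the paper's parcel graph):

```python
import numpy as np

def normalize_adjacency(a):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    a = a + np.eye(a.shape[0])            # add self-loops
    d = np.diag(1.0 / np.sqrt(a.sum(1)))  # D^{-1/2}
    return d @ a @ d

def gcn_layer(a_hat, x, w):
    """One graph-convolution layer: H' = ReLU(A_hat @ H @ W).
    Each node mixes its features with those of its neighbors, which is how
    information from labeled parcels reaches unlabeled ones."""
    return np.maximum(a_hat @ x @ w, 0.0)

# toy graph of 4 parcels in a chain, with 3 spectral features each
a = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = np.random.rand(4, 3)
w = np.random.rand(3, 2)
h = gcn_layer(normalize_adjacency(a), x, w)
print(h.shape)  # (4, 2): features propagated along parcel adjacency
```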

Journal ArticleDOI
TL;DR: In this article, a comprehensive review of change detection in very high-spatial-resolution (VHR) images is presented, which mainly includes three aspects: methods, applications, and future directions.
Abstract: Change detection is a vibrant area of research in remote sensing. Thanks to increases in the spatial resolution of remote sensing images, subtle changes at a finer geometrical scale can now be effectively detected. However, change detection from very-high-spatial-resolution (VHR) (≤5 m) remote sensing images is challenging due to limited spectral information, spectral variability, geometric distortion, and information loss. To address these challenges, many change detection algorithms have been developed. However, a comprehensive review of change detection in VHR images is lacking in the existing literature. This review aims to fill the gap and mainly includes three aspects: methods, applications, and future directions.

Journal ArticleDOI
TL;DR: The findings show how solutions based on the proposed architecture can be implemented with an additional energy budget of only 6% compared to the normal operation of the IoT devices, and the validation results make the contribution a strong candidate for use in automated and incentive-based irrigation water management systems.

Journal ArticleDOI
TL;DR: A novel unsupervised deep-learning-based CD method that can effectively model contextual information and handle the large number of bands in multispectral HR images is presented.
Abstract: To overcome the limited capability of most state-of-the-art change detection (CD) methods in modeling spatial context of multispectral high spatial resolution (HR) images and exploiting all spectral bands jointly, this letter presents a novel unsupervised deep-learning-based CD method that can effectively model contextual information and handle the large number of bands in multispectral HR images. This is achieved by exploiting all spectral bands after grouping them into spectral-dedicated band groups. To eliminate the necessity of multitemporal training data, the proposed method exploits a data set targeted for image classification to train spectral-dedicated Auxiliary Classifier Generative Adversarial Networks (ACGANs). They are used to obtain pixelwise deep change hypervector from multitemporal images. Each feature in deep change hypervector is analyzed based on the magnitude to identify changed pixels. An ensemble decision fusion strategy is used to combine change information from different features. Experimental results on the urban, Alpine, and agricultural Sentinel-2 data sets confirm the effectiveness of the proposed method.

Journal ArticleDOI
TL;DR: This is the first study that explores a deep reinforcement learning model for hyperspectral image analysis, thus opening a new door for future research and showcasing the great potential of deep reinforcement learning in remote sensing applications.
Abstract: Band selection refers to the process of choosing the most relevant bands in a hyperspectral image. By selecting a limited number of optimal bands, we aim at speeding up model training, improving accuracy, or both. It reduces redundancy among spectral bands while trying to preserve the original information of the image. By now, many efforts have been made to develop unsupervised band selection approaches, the majority of which are heuristic algorithms devised by trial and error. In this article, we are interested in training an intelligent agent that, given a hyperspectral image, is capable of automatically learning policy to select an optimal band subset without any hand-engineered reasoning. To this end, we frame the problem of unsupervised band selection as a Markov decision process, propose an effective method to parameterize it, and finally solve the problem by deep reinforcement learning. Once the agent is trained, it learns a band-selection policy that guides the agent to sequentially select bands by fully exploiting the hyperspectral image and previously picked bands. Furthermore, we propose two different reward schemes for the environment simulation of deep reinforcement learning and compare them in experiments. This, to the best of our knowledge, is the first study that explores a deep reinforcement learning model for hyperspectral image analysis, thus opening a new door for future research and showcasing the great potential of deep reinforcement learning in remote sensing applications. Extensive experiments are carried out on four hyperspectral data sets, and experimental results demonstrate the effectiveness of the proposed method. The code is publicly available.
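
A sketch of the Markov-decision-process framing described above: state = bands picked so far, action = pick an unpicked band, reward = improvement of a subset-quality score. The `policy` and `evaluate` callables are placeholders for the learned agent and for one of the paper's two reward schemes:

```python
import random

def band_selection_episode(n_bands, k, policy, evaluate):
    """Run one episode of sequential band selection."""
    state, prev_score = frozenset(), 0.0
    for _ in range(k):
        action = policy(state, n_bands)   # index of the next band to pick
        state = state | {action}
        score = evaluate(state)           # quality of the current subset
        reward = score - prev_score       # what an RL agent would be trained on
        prev_score = score
    return state

# a random policy as a stand-in for the trained agent
random_policy = lambda state, n: random.choice(
    [b for b in range(n) if b not in state])
subset = band_selection_episode(n_bands=200, k=5, policy=random_policy,
                                evaluate=lambda s: float(len(s)))
print(sorted(subset))  # five selected band indices
```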

Journal ArticleDOI
TL;DR: In this paper, the authors show that higher-order interactions are ubiquitous and, similarly to their pairwise counterparts, characterized by heterogeneous dynamics, with bursty trains of rapidly recurring higher-order events separated by long periods of inactivity.
Abstract: Human social interactions in local settings can be experimentally detected by recording the physical proximity and orientation of people. Such interactions, approximating face-to-face communications, can be effectively represented as time-varying social networks with links being unceasingly created and destroyed over time. Traditional analyses of temporal networks have addressed mostly pairwise interactions, where links describe dyadic connections among individuals. However, many network dynamics are hardly ascribable to pairwise settings but often comprise larger groups, which are better described by higher-order interactions. Here we investigate the higher-order organizations of temporal social networks by analyzing five publicly available datasets collected in different social settings. We find that higher-order interactions are ubiquitous and, similarly to their pairwise counterparts, characterized by heterogeneous dynamics, with bursty trains of rapidly recurring higher-order events separated by long periods of inactivity. We investigate the evolution and formation of groups by looking at the transition rates between different higher-order structures. We find that in more spontaneous social settings, groups are characterized by slower formation and disaggregation, while in work settings these phenomena are more abrupt, possibly reflecting pre-organized social dynamics. Finally, we observe temporal reinforcement, suggesting that the longer a group stays together, the higher the probability that the same interaction pattern persists in the future. Our findings suggest the importance of considering the higher-order structure of social interactions when investigating human temporal dynamics.
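
The burstiness analysis described above reduces to computing inter-event times per group; a minimal sketch (the event representation is illustrative — a heavy-tailed gap distribution is what would signal the bursty trains reported in the paper):

```python
from collections import defaultdict
import numpy as np

def interevent_times(events):
    """events: iterable of (timestamp, frozenset_of_people) higher-order
    interactions. Returns, per group, the gaps between consecutive
    occurrences of exactly that group."""
    times = defaultdict(list)
    for t, group in sorted(events, key=lambda e: e[0]):
        times[group].append(t)
    return {g: np.diff(ts) for g, ts in times.items() if len(ts) > 1}

events = [(0, frozenset("ab")), (1, frozenset("ab")), (2, frozenset("abc")),
          (40, frozenset("ab")), (41, frozenset("ab"))]
# two rapid bursts of the pair {a, b} separated by a long period of inactivity
print(interevent_times(events)[frozenset("ab")])  # [ 1 39  1]
```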

Journal ArticleDOI
TL;DR: In this article, the authors compare the MCC with the diagnostic odds ratio (DOR), a statistical rate employed sometimes in biomedical sciences, and describe the relationships between them, by also taking advantage of an innovative geometrical plot called confusion tetrahedron.
Abstract: To assess the quality of a binary classification, researchers often take advantage of a four-entry contingency table called confusion matrix , containing true positives, true negatives, false positives, and false negatives. To recap the four values of a confusion matrix in a unique score, researchers and statisticians have developed several rates and metrics. In the past, several scientific studies already showed why the Matthews correlation coefficient (MCC) is more informative and trustworthy than confusion-entropy error, accuracy, F1 score, bookmaker informedness, markedness, and balanced accuracy. In this study, we compare the MCC with the diagnostic odds ratio (DOR), a statistical rate employed sometimes in biomedical sciences. After examining the properties of the MCC and of the DOR, we describe the relationships between them, by also taking advantage of an innovative geometrical plot called confusion tetrahedron , presented here for the first time. We then report some use cases where the MCC and the DOR produce discordant outcomes, and explain why the Matthews correlation coefficient is more informative and reliable between the two. Our results can have a strong impact in computer science and statistics, because they clearly explain why the trustworthiness of the information provided by the Matthews correlation coefficient is higher than the one generated by the diagnostic odds ratio.
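
For reference, the two rates being compared, written for a confusion matrix with entries TP, TN, FP, FN (standard textbook definitions):

```latex
\[
  \mathrm{DOR} \;=\; \frac{TP \cdot TN}{FP \cdot FN},
  \qquad
  \mathrm{MCC} \;=\; \frac{TP \cdot TN \;-\; FP \cdot FN}
       {\sqrt{(TP+FP)\,(TP+FN)\,(TN+FP)\,(TN+FN)}} .
\]
% DOR is unbounded (and undefined when FP or FN is zero), whereas MCC is
% confined to [-1, +1]; this difference underlies the discordant use cases
% reported in the paper.
```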

Journal ArticleDOI
TL;DR: In this article, the authors present a real world case study in order to demonstrate how scalability is positively affected by re-implementing a monolithic architecture (MA) into a microservices architecture (MSA).
Abstract: An increasing interest is growing around the idea of microservices and the promise of improving scalability when compared to monolithic systems. Several companies are evaluating the pros and cons of a complex migration. In particular, financial institutions are positioned in a difficult situation due to the economic climate and the appearance of agile competitors that can operate in a more flexible legal framework and that have, since day one, run their business on more agile architectures, without being bound to outdated technological standards. In this paper, we present a real-world case study in order to demonstrate how scalability is positively affected by re-implementing a monolithic architecture (MA) into a microservices architecture (MSA). The case study is based on the FX Core system, a mission-critical system of Danske Bank, the largest bank in Denmark and one of the leading financial institutions in Northern Europe. The technical problem that has been addressed and solved in this paper is the identification of a repeatable migration process that can be used to convert a real-world monolithic architecture into a microservices architecture in the specific setting of the financial domain, typically characterized by legacy systems and batch-based processing on heterogeneous data sources.

Journal ArticleDOI
TL;DR: In this article, the authors present the principles of operation of resistive AC-Coupled Silicon Detectors (RSDs) and measurements of the temporal and spatial resolutions using a combined analysis of laser and beam test data.
Abstract: This paper presents the principles of operation of Resistive AC-Coupled Silicon Detectors (RSDs) and measurements of the temporal and spatial resolutions using a combined analysis of laser and beam test data. RSDs are a new type of n-in-p silicon sensor based on the Low-Gain Avalanche Diode (LGAD) technology, where the n+ implant has been designed to be resistive, and the read-out is obtained via AC-coupling. The truly innovative feature of RSD is that the signal generated by an impinging particle is shared isotropically among multiple read-out pads without the need for floating electrodes or an external magnetic field. Careful tuning of the coupling oxide thickness and the n+ doping profile is at the basis of the successful functioning of this device. Several RSD matrices with different pad width-pitch geometries have been extensively tested with a laser setup in the Laboratory for Innovative Silicon Sensors in Torino, while a smaller set of devices have been tested at the Fermilab Test Beam Facility with a 120 GeV/c proton beam. The measured spatial resolution ranges between 2.5 μm for 70–100 pad-pitch geometry and 17 μm with 200–500 matrices, a factor of 10 better than what is achievable with binary read-out (bin size/√12). Beam test data show a temporal resolution of ∼40 ps for 200 μm pitch devices, in line with the best performances of LGAD sensors at the same gain.
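
The binary read-out baseline quoted above follows from the variance of a uniform distribution over one bin; a worked example (the 100 μm pitch is chosen here purely for illustration):

```latex
\[
  \sigma_{\text{binary}} \;=\; \frac{\text{bin size}}{\sqrt{12}},
  \qquad\text{e.g.}\quad
  \sigma_{\text{binary}} \;=\; \frac{100\,\mu\mathrm{m}}{\sqrt{12}}
  \;\approx\; 29\,\mu\mathrm{m},
\]
% versus the ~2.5 um the RSD reaches by interpolating the signal shared
% among neighbouring pads -- roughly the factor-of-10 improvement cited above.
```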

Journal ArticleDOI
TL;DR: In this article, the weak-coupling expansion of the dense QCD equation of state and the contribution arising from non-Abelian interactions among long-wavelength, dynamically screened gluonic fields are considered.
Abstract: Accurate knowledge of the thermodynamic properties of zero-temperature, high-density quark matter plays an integral role in attempts to constrain the behavior of the dense QCD matter found inside neutron-star cores, irrespective of the phase realized inside the stars. In this Letter, we consider the weak-coupling expansion of the dense QCD equation of state and compute the next-to-next-to-next-to-leading-order contribution arising from the non-Abelian interactions among long-wavelength, dynamically screened gluonic fields. Accounting for these interactions requires an all-loop resummation, which can be performed using hard-thermal-loop (HTL) kinematic approximations. Concretely, we perform a full two-loop computation using the HTL effective theory, valid for the long-wavelength, or soft, modes. We find that the soft sector is well behaved within cold quark matter, contrary to the case encountered at high temperatures, and find that the new contribution decreases the renormalization-scale dependence of the equation of state at high density.

Proceedings ArticleDOI
10 Jan 2021
TL;DR: 3D local deep descriptors are extracted, canonicalised with respect to their estimated local reference frame and encoded into rotation-invariant compact descriptors by a PointNet-based deep neural network to be used to register point clouds without requiring an initial alignment.
Abstract: We present a simple yet effective method for learning distinctive 3D local deep descriptors (DIPs) that can be used to register point clouds without requiring an initial alignment. Point cloud patches are extracted, canonicalised with respect to their estimated local reference frame and encoded into rotation-invariant compact descriptors by a PointNet-based deep neural network. DIPs can effectively generalise across different sensor modalities because they are learnt end-to-end from locally and randomly sampled points. Because DIPs encode only local geometric information, they are robust to clutter, occlusions and missing regions. We evaluate and compare DIPs against alternative hand-crafted and deep descriptors on several indoor and outdoor datasets consisting of point clouds reconstructed using different sensors. Results show that DIPs (i) achieve comparable results to the state-of-the-art on RGB-D indoor scenes (3DMatch dataset), (ii) outperform state-of-the-art by a large margin on laser-scanner outdoor scenes (ETH dataset), and (iii) generalise to indoor scenes reconstructed with the Visual-SLAM system of Android ARCore. Source code: https://github.com/fabiopoiesi/dip.
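
The canonicalisation step can be illustrated with a covariance-based local reference frame, a common construction for rotation invariance before a PointNet-style encoder (a sketch of the idea, not necessarily the exact LRF used by DIPs):

```python
import numpy as np

def canonicalise_patch(points):
    """Centre a point-cloud patch and rotate it into a local reference frame
    estimated from the eigenvectors of its covariance matrix."""
    centred = points - points.mean(axis=0)
    cov = centred.T @ centred / len(points)
    _, eigvecs = np.linalg.eigh(cov)       # columns: eigenvectors, ascending
    return centred @ eigvecs[:, ::-1]      # express points in the LRF axes

patch = np.random.rand(128, 3)
q, _ = np.linalg.qr(np.random.randn(3, 3))            # random orthogonal transform
a, b = canonicalise_patch(patch), canonicalise_patch(patch @ q.T)
# up to axis-sign ambiguities of the eigenvectors, a and b describe
# the same local geometry regardless of the patch orientation
print(a.shape, b.shape)
```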

Journal ArticleDOI
TL;DR: Atena, as discussed by the authors, is a psychoeducational chatbot supporting healthy coping with stress and anxiety among a population of university students during the COVID-19 pandemic; participants were asked to complete web-based versions of the 7-item Generalized Anxiety Disorder scale (GAD-7), the 10-item Perceived Stress Scale (PSS-10), and the Five-Facet Mindfulness Questionnaire (FFMQ) at baseline and postintervention to assess effectiveness.
Abstract: Background: University students are increasingly reporting common mental health problems, such as stress, anxiety, and depression, and they frequently face barriers to seeking psychological support because of stigma, cost, and availability of mental health services. This issue is even more critical in the challenging time of the COVID-19 pandemic. Digital mental health interventions, such as those delivered via chatbots on mobile devices, offer the potential to achieve scalability of healthy-coping interventions by lowering cost and supporting prevention. Objective: The goal of this study was to conduct a proof-of-concept evaluation measuring the engagement and effectiveness of Atena, a psychoeducational chatbot supporting healthy coping with stress and anxiety, among a population of university students. Methods: In a proof-of-concept study, 71 university students were recruited during the COVID-19 pandemic; 68% (48/71) were female, they were all in their first year of university, and their mean age was 20.6 years (SD 2.4). Enrolled students were asked to use the Atena psychoeducational chatbot for 4 weeks (eight sessions; two per week), which provided healthy-coping strategies based on cognitive behavioral therapy, positive psychology, and mindfulness techniques. The intervention program consisted of conversations combined with audiovisual clips delivered via the Atena chatbot. Participants were asked to complete web-based versions of the 7-item Generalized Anxiety Disorder scale (GAD-7), the 10-item Perceived Stress Scale (PSS-10), and the Five-Facet Mindfulness Questionnaire (FFMQ) at baseline and postintervention to assess effectiveness. They were also asked to complete the User Engagement Scale–Short Form at week 2 to assess engagement with the chatbot and to provide qualitative comments on their overall experience with Atena postintervention. Results: Participants engaged with the Atena chatbot an average of 78 (SD 24.8) times over the study period. A total of 61 out of 71 (86%) participants completed the first 2 weeks of the intervention and provided data on engagement (10/71, 14% attrition). A total of 41 participants out of 71 (58%) completed the full intervention and the postintervention questionnaires (30/71, 42% attrition). Results from the completer analysis showed a significant decrease in anxiety symptoms for participants in more extreme GAD-7 score ranges (t39=0.94; P=.009) and a decrease in stress symptoms as measured by the PSS-10 (t39=2.00; P=.05) for all participants postintervention. Participants also improved significantly in the describing and nonjudging facets, based on their FFMQ subscale scores, and asked for some improvements in the user experience with the chatbot. Conclusions: This study shows the benefit of deploying a digital healthy-coping intervention via a chatbot to support university students experiencing higher levels of distress. While findings collected during the COVID-19 pandemic show promise, further research is required to confirm conclusions.

Journal ArticleDOI
TL;DR: These results make it possible to prove, in the case of certain Sobolev kernels, that the algorithms have optimal stability and optimal convergence rates, including for functions outside the native space of the kernel.

Proceedings ArticleDOI
16 Jun 2021
TL;DR: The authors proposed a new training protocol based on three specific losses which help a translation network to learn a smooth and disentangled latent style space in which both intra and inter-domain interpolations correspond to gradual changes in the generated images and the content of the source image is better preserved during the translation.
Abstract: Image-to-Image (I2I) multi-domain translation models are usually also evaluated using the quality of their semantic interpolation results. However, state-of-the-art models frequently show abrupt changes in the image appearance during interpolation, and usually perform poorly in interpolations across domains. In this paper, we propose a new training protocol based on three specific losses which help a translation network to learn a smooth and disentangled latent style space in which: 1) Both intra- and inter-domain interpolations correspond to gradual changes in the generated images and 2) The content of the source image is better preserved during the translation. Moreover, we propose a novel evaluation metric to properly measure the smoothness of the latent style space of I2I translation models. The proposed method can be plugged into existing translation approaches, and our extensive experiments on different datasets show that it can significantly boost the quality of the generated images and the graduality of the interpolations.
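
The interpolation setting being evaluated amounts to decoding linearly blended style codes; a toy sketch (the style dimensionality and the `decode` generator are hypothetical, standing in for the translation network):

```python
import numpy as np

def interpolate_styles(s_a, s_b, steps=8):
    """Linear interpolation in the latent style space; with the smoothness
    losses proposed in the paper, decoding these codes should yield gradual
    image changes both within and across domains."""
    alphas = np.linspace(0.0, 1.0, steps)
    return [(1 - a) * s_a + a * s_b for a in alphas]

s_a, s_b = np.random.randn(64), np.random.randn(64)   # two style codes
codes = interpolate_styles(s_a, s_b)
# images = [decode(content_code, s) for s in codes]   # hypothetical generator
print(len(codes), codes[0].shape)
```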

Journal ArticleDOI
TL;DR: An in-depth analysis of the use of convolutional neural networks (CNNs) for burned area (BA) mapping, combining radar and optical datasets acquired by the Sentinel-1 and Sentinel-2 on-board sensors, respectively; the combined approach significantly improves existing methods based on either sensor type.