
Showing papers by "Fondazione Bruno Kessler" published in 2019


Proceedings Article
01 Jan 2019
TL;DR: This framework decouples appearance and motion information using a self-supervised formulation and supports complex motions through a representation consisting of a set of learned keypoints along with their local affine transformations.
Abstract: Image animation consists of generating a video sequence so that an object in a source image is animated according to the motion of a driving video. Our framework addresses this problem without using any annotation or prior information about the specific object to animate. Once trained on a set of videos depicting objects of the same category (e.g. faces, human bodies), our method can be applied to any object of this class. To achieve this, we decouple appearance and motion information using a self-supervised formulation. To support complex motions, we use a representation consisting of a set of learned keypoints along with their local affine transformations. A generator network models occlusions arising during target motions and combines the appearance extracted from the source image and the motion derived from the driving video. Our framework scores best on diverse benchmarks and on a variety of object categories.

441 citations
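
The motion representation described above can be sketched in a few lines. Below is a minimal numpy illustration, not the authors' code, of how a dense backward-warping field follows from one learned keypoint and its local affine transformation; the first-order mapping T(z) = kp_src + J_src J_drv^{-1} (z - kp_drv) is assumed from the paper's description, and all shapes and values are illustrative.

import numpy as np

def local_affine_motion(coords, kp_src, jac_src, kp_drv, jac_drv):
    # Maps driving-frame coordinates z back to source-frame coordinates via
    # T(z) = kp_src + (J_src @ inv(J_drv)) @ (z - kp_drv).
    A = jac_src @ np.linalg.inv(jac_drv)
    return kp_src + (coords - kp_drv) @ A.T

# Dense pixel grid of a 64x64 feature map (illustrative size).
ys, xs = np.mgrid[0:64, 0:64]
coords = np.stack([xs, ys], axis=-1).reshape(-1, 2).astype(float)

kp_src, kp_drv = np.array([30.0, 30.0]), np.array([34.0, 28.0])
jac_src = np.eye(2)                                  # local affine parts (2x2)
jac_drv = np.array([[1.1, 0.0], [0.0, 0.9]])
warp = local_affine_motion(coords, kp_src, jac_src, kp_drv, jac_drv)
# 'warp' tells, for every driving-frame pixel, where to sample the source image;
# the generator then combines such per-keypoint motions with an occlusion map.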


Journal ArticleDOI
Roel Aaij, C. Abellán Beteta, Bernardo Adeva, Marco Adinolfi, +877 more · Institutions (60)
TL;DR: In this paper, a new pentaquark state, P_c(4312)^+, was discovered with a statistical significance of 7.3σ in a data sample of Λ_b^0 → J/ψpK^- decays, which is an order of magnitude larger than that previously analyzed by the LHCb Collaboration.
Abstract: A narrow pentaquark state, P_c(4312)^+, decaying to J/ψp, is discovered with a statistical significance of 7.3σ in a data sample of Λ_b^0 → J/ψpK^- decays, which is an order of magnitude larger than that previously analyzed by the LHCb Collaboration. The P_c(4450)^+ pentaquark structure formerly reported by LHCb is confirmed and observed to consist of two narrow overlapping peaks, P_c(4440)^+ and P_c(4457)^+, where the statistical significance of this two-peak interpretation is 5.4σ. The proximity of the Σ_c^+ D̄^0 and Σ_c^+ D̄^{*0} thresholds to the observed narrow peaks suggests that they play an important role in the dynamics of these states.

402 citations


Proceedings ArticleDOI
01 Jun 2019
TL;DR: MuST-C, a multilingual speech translation corpus whose size and quality facilitate the training of end-to-end SLT systems from English into 8 languages, is presented, together with an empirical verification of its quality and SLT results computed with a state-of-the-art approach on each language direction.
Abstract: Current research on spoken language translation (SLT) must confront the scarcity of sizeable and publicly available training corpora. This problem hinders the adoption of neural end-to-end approaches, which represent the state of the art in the two parent tasks of SLT: automatic speech recognition and machine translation. To fill this gap, we created MuST-C, a multilingual speech translation corpus whose size and quality will facilitate the training of end-to-end systems for SLT from English into 8 languages. For each target language, MuST-C comprises at least 385 hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual transcriptions and translations. Together with a description of the corpus creation methodology (scalable to add new data and cover new languages), we provide an empirical verification of its quality and SLT results computed with a state-of-the-art approach on each language direction.

320 citations


Journal ArticleDOI
TL;DR: A novel unsupervised context-sensitive framework—deep change vector analysis (DCVA)—for CD in multitemporal VHR images that exploits convolutional neural network (CNN) features is proposed, and experimental results on multitemporal data sets of Worldview-2, Pleiades, and Quickbird images confirm the effectiveness of the proposed method.
Abstract: Change detection (CD) in multitemporal images is an important application of remote sensing. Recent technological evolution has provided very high spatial resolution (VHR) multitemporal optical satellite images showing high spatial correlation among pixels and requiring an effective modeling of spatial context to accurately capture change information. Here, we propose a novel unsupervised context-sensitive framework—deep change vector analysis (DCVA)—for CD in multitemporal VHR images that exploits convolutional neural network (CNN) features. To have an unsupervised system, DCVA starts from a suboptimal pretrained multilayered CNN for obtaining deep features that can model spatial relationships among neighboring pixels and thus complex objects. An automatic feature selection strategy is employed layerwise to select features emphasizing both high and low prior-probability change information. Selected features from multiple layers are combined into a deep feature hypervector providing a multiscale scene representation. The use of the same pretrained CNN for semantic segmentation of single images enables us to obtain coherent multitemporal deep feature hypervectors that can be compared pixelwise to obtain deep change vectors that also model spatial context information. Deep change vectors are analyzed based on their magnitude to identify changed pixels. Then, deep change vectors corresponding to identified changed pixels are binarized to obtain compressed binary deep change vectors that preserve information about the direction (kind) of change. Changed pixels are analyzed for multiple-change detection based on the binary features, thus implicitly using the spatial information. Experimental results on multitemporal data sets of Worldview-2, Pleiades, and Quickbird images confirm the effectiveness of the proposed method.

310 citations
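
As a rough illustration of the deep-change-vector idea described above, the numpy sketch below compares pixelwise deep feature hypervectors from two dates, thresholds their magnitude to flag changed pixels, and binarizes the vectors to retain the direction (kind) of change. The feature arrays and the quantile threshold are placeholders; the actual method uses selected multi-layer CNN features and an automatic threshold.

import numpy as np

def deep_change_map(feat_t1, feat_t2, quantile=0.95):
    # feat_tX: (H, W, D) deep feature hypervectors for the two acquisition dates.
    g = feat_t1 - feat_t2                       # deep change vectors, pixelwise
    mag = np.linalg.norm(g, axis=-1)            # magnitude -> changed / unchanged
    changed = mag > np.quantile(mag, quantile)  # stand-in for automatic thresholding
    direction = np.sign(g).astype(np.int8)      # binarized vectors keep change kind
    return changed, direction

feat_t1 = np.random.rand(128, 128, 64)
feat_t2 = feat_t1.copy()
feat_t2[40:60, 40:60] += 0.8                    # synthetic "change" region
changed, direction = deep_change_map(feat_t1, feat_t2)
print(changed.sum())                            # flags the 20x20 changed block (400)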


Journal ArticleDOI
TL;DR: An increase in remote sensing and ancillary data sets opens up the possibility of utilizing multimodal data sets in a joint manner to further improve the performance of the processing approaches with respect to applications at hand.
Abstract: The recent, sharp increase in the availability of data captured by different sensors, combined with their considerable heterogeneity, poses a serious challenge for the effective and efficient processing of remotely sensed data. Such an increase in remote sensing and ancillary data sets, however, opens up the possibility of utilizing multimodal data sets in a joint manner to further improve the performance of the processing approaches with respect to applications at hand. Multisource data fusion has, therefore, received enormous attention from researchers worldwide for a wide variety of applications. Moreover, thanks to the revisit capability of several […]

226 citations


Journal ArticleDOI
TL;DR: A comprehensive overview of the development and uptake of NLP methods applied to free-text clinical notes related to chronic diseases is provided, including the investigation of challenges faced by NLP methodologies in understanding clinical narratives.
Abstract: Background: Novel approaches that complement and go beyond evidence-based medicine are required in the domain of chronic diseases, given the growing incidence of such conditions in the worldwide population. A promising avenue is the secondary use of electronic health records (EHRs), where patient data are analyzed to conduct clinical and translational research. Methods based on machine learning to process EHRs are resulting in improved understanding of patient clinical trajectories and chronic disease risk prediction, creating a unique opportunity to derive previously unknown clinical insights. However, a wealth of clinical histories remains locked behind clinical narratives in free-form text. Consequently, unlocking the full potential of EHR data is contingent on the development of natural language processing (NLP) methods to automatically transform clinical text into structured clinical data that can guide clinical decisions and potentially delay or prevent disease onset. Objective: The goal of the research was to provide a comprehensive overview of the development and uptake of NLP methods applied to free-text clinical notes related to chronic diseases, including the investigation of challenges faced by NLP methodologies in understanding clinical narratives. Methods: Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed, and searches were conducted in 5 databases using “clinical notes,” “natural language processing,” and “chronic disease” and their variations as keywords to maximize coverage of the articles. Results: Of the 2652 articles considered, 106 met the inclusion criteria. Review of the included papers resulted in identification of 43 chronic diseases, which were then further classified into 10 disease categories using the International Classification of Diseases, 10th Revision. The majority of studies focused on diseases of the circulatory system (n=38), while endocrine and metabolic diseases were fewest (n=14). This was due to the structure of clinical records related to metabolic diseases, which typically contain much more structured data, compared with medical records for diseases of the circulatory system, which rely more on unstructured data and have consequently been a stronger focus of NLP research. The review has shown that there is a significant increase in the use of machine learning methods compared to rule-based approaches; however, deep learning methods remain emergent (n=3). Consequently, the majority of works focus on classification of disease phenotype, with only a handful of papers addressing extraction of comorbidities from the free text or integration of clinical notes with structured data. There is a notable use of relatively simple methods, such as shallow classifiers (or combinations with rule-based methods), due to the interpretability of predictions, which still represents a significant issue for more complex methods. Finally, scarcity of publicly available data may also have contributed to insufficient development of more advanced methods, such as extraction of word embeddings from clinical notes. Conclusions: Efforts are still required to improve (1) progression of clinical NLP methods from extraction toward understanding; (2) recognition of relations among entities rather than entities in isolation; (3) temporal extraction to understand past, current, and future clinical events; (4) exploitation of alternative sources of clinical knowledge; and (5) availability of large-scale, de-identified clinical corpora.

225 citations


Journal ArticleDOI
TL;DR: To fully exploit the available multitemporal HS images and their rich information content in change detection (CD), it is necessary to develop advanced automatic techniques that can address the complexity of the extraction of change information in an HS space.
Abstract: The expected increasing availability of remote sensing satellite hyperspectral (HS) images provides an important and unique data source for Earth observation (EO). HS images are characterized by a detailed spectral sampling (i.e., very high spectral resolution) over a wide spectral wavelength range, which makes it possible to monitor land-cover dynamics at a fine spectral scale. This is due to their capability of detecting subtle spectral variations in multitemporal images associated with land-cover changes that are not detectable in traditional multispectral (MS) images because of their limited spectral resolution (i.e., sufficient, as a rule, for representing only abrupt, strong changes in the spectral signature). To fully exploit the available multitemporal HS images and their rich information content in change detection (CD), it is necessary to develop advanced automatic techniques that can address the complexity of the extraction of change information in an HS space. This article provides a comprehensive overview of the CD problem in HS images, as well as a survey of the main CD techniques available for multitemporal HS images. We review both widely used methods and new techniques proposed in the recent literature. The basic concepts, categories, open issues, and challenges related to CD in HS images are discussed and analyzed in detail. Experimental results obtained using state-of-the-art approaches are shown to illustrate relevant concepts and problems.

215 citations
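
For reference, many of the surveyed techniques build on classical change vector analysis (CVA), which compares the per-pixel spectral vectors x_1 and x_2 of the two dates through a magnitude and an angle (standard textbook definitions, stated here for context rather than taken from this survey):

\rho = \lVert x_2 - x_1 \rVert, \qquad \theta = \arccos\!\left(\frac{x_1 \cdot x_2}{\lVert x_1\rVert\,\lVert x_2\rVert}\right)

The magnitude ρ separates changed from unchanged pixels, while the angle θ (or the direction of x_2 - x_1) helps discriminate among kinds of change; the high dimensionality of HS pixels is precisely what makes this analysis harder than in the MS case.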


Journal ArticleDOI
Shuang-Nan Zhang, Andrea Santangelo, Marco Feroci, +150 more · Institutions (21)
TL;DR: The enhanced X-ray Timing and Polarimetry mission (eXTP) is a space science mission designed to study fundamental physics under extreme conditions of density, gravity, and magnetism; it will also be a very powerful observatory for astrophysics, providing observations of unprecedented quality on a variety of galactic and extragalactic objects.
Abstract: In this paper we present the enhanced X-ray Timing and Polarimetry mission—eXTP. eXTP is a space science mission designed to study fundamental physics under extreme conditions of density, gravity and magnetism. The mission aims at determining the equation of state of matter at supra-nuclear density, measuring effects of QED, and understanding the dynamics of matter in strong-field gravity. In addition to investigating fundamental physics, eXTP will be a very powerful observatory for astrophysics that will provide observations of unprecedented quality on a variety of galactic and extragalactic objects. In particular, its wide-field monitoring capabilities will be highly instrumental in detecting the electromagnetic counterparts of gravitational wave sources. The paper provides a detailed description of: (1) the technological and technical aspects, and the expected performance of the instruments of the scientific payload; (2) the elements and functions of the mission, from the spacecraft to the ground segment.

206 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the structure and electrical model of a single photon avalanche diode (SPAD), and the integration in an array, i.e., the SiPM.
Abstract: The silicon photomultiplier (SiPM) is becoming the device of choice for different applications, for example in fast timing, as in time-of-flight positron emission tomography (TOF-PET) and in high-energy physics (HEP). It is also becoming a choice in many single-photon or few-photon applications, such as spectroscopy, quantum experiments, and distance measurements (LIDAR). In order to fully benefit from the good performance of the SiPM, in particular its sensitivity, dynamic range, and intrinsically fast timing properties, it is necessary to understand, quantitatively describe, and simulate the various parameters concerned. These analyses consider the structure and the electrical model of a single-photon avalanche diode (SPAD), i.e. the SiPM microcell, and its integration in an array, i.e. the SiPM. Additionally, for several applications a more phenomenological and complete view of SiPMs is needed, covering, e.g., photon detection efficiency, single-photon time resolution, SiPM signal response, gain fluctuation, dark count rate, afterpulsing, and prompt and delayed optical crosstalk. These quantities can strongly influence the time and energy resolution, for example in PET and HEP. A complete overview of all of these parameters allows one to draw conclusions on how the best performance can be achieved for the various needs of different applications.

162 citations
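
Two of the quantities discussed above admit simple back-of-envelope estimates. The Python snippet below evaluates the textbook single-microcell gain, G = C_cell(V_bias - V_bd)/e, and the saturation of the photon response of a device with N_cell microcells, N_fired ≈ N_cell(1 - exp(-PDE·N_ph/N_cell)); these are standard SiPM relations, and all numerical values are assumed rather than taken from the paper.

import math

e = 1.602e-19            # elementary charge [C]
C_cell = 50e-15          # microcell capacitance [F] (assumed value)
dV = 4.0                 # overvoltage V_bias - V_breakdown [V] (assumed value)
print(f"gain ~ {C_cell * dV / e:.1e}")        # ~1.2e6, a typical SiPM gain

n_cell, pde = 10_000, 0.50
for n_ph in (100, 10_000, 100_000):
    n_fired = n_cell * (1 - math.exp(-pde * n_ph / n_cell))
    print(n_ph, round(n_fired))               # response flattens: finite dynamic range

The second loop makes the dynamic-range point concrete: once the incident photon number approaches the number of microcells, the number of fired cells saturates and the response becomes strongly nonlinear.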


Proceedings ArticleDOI
20 Jun 2019
TL;DR: This paper proposes LSTA as a mechanism to focus on features from spatially relevant parts while attention is tracked smoothly across the video sequence, achieving state-of-the-art performance on four standard benchmarks.
Abstract: Egocentric activity recognition is one of the most challenging tasks in video analysis. It requires fine-grained discrimination of small objects and their manipulation. While some methods are based on strong supervision and attention mechanisms, they are either annotation-intensive or do not take spatio-temporal patterns into account. In this paper we propose LSTA as a mechanism to focus on features from spatially relevant parts while attention is tracked smoothly across the video sequence. We demonstrate the effectiveness of LSTA on egocentric activity recognition with an end-to-end trainable two-stream architecture, achieving state-of-the-art performance on four standard benchmarks.

143 citations
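
The sketch below is only meant to illustrate the general idea of spatial attention carried smoothly across frames; it is not the LSTA cell itself, which couples attention with the recurrent memory of a ConvLSTM. All shapes and the smoothing constant are illustrative, and PyTorch is assumed.

import torch

def tracked_attention(features, alpha=0.8):
    # features: (T, C, H, W) frame-level CNN features of one video clip.
    T, C, H, W = features.shape
    prev = torch.full((H * W,), 1.0 / (H * W))       # uniform attention at t=0
    pooled = []
    for t in range(T):
        scores = features[t].mean(dim=0).flatten()   # (H*W,) spatial saliency
        attn = torch.softmax(scores, dim=0)
        attn = alpha * prev + (1 - alpha) * attn     # smooth tracking over time
        prev = attn
        pooled.append(features[t].flatten(1) @ attn) # (C,) attended descriptor
    return torch.stack(pooled)                       # (T, C) sequence for a classifier

print(tracked_attention(torch.randn(8, 64, 7, 7)).shape)   # torch.Size([8, 64])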


Journal ArticleDOI
TL;DR: In this paper, the authors provide an analysis of the mass budget of the pion and proton in QCD and discuss the special role of the kaon, which lies near the boundary between dominance of strong and Higgs mass-generation mechanisms.
Abstract: Understanding the origin and dynamics of hadron structure, and in turn that of atomic nuclei, is a central goal of nuclear physics. This challenge entails the questions of how the roughly 1 GeV mass scale that characterizes atomic nuclei appears; why it has the observed value; and, enigmatically, why the composite Nambu-Goldstone (NG) bosons in quantum chromodynamics (QCD) are abnormally light in comparison. In this perspective, we provide an analysis of the mass budget of the pion and proton in QCD; discuss the special role of the kaon, which lies near the boundary between dominance of strong and Higgs mass-generation mechanisms; and explain the need for a coherent effort in QCD phenomenology and continuum calculations, in exa-scale computing as provided by lattice QCD, and in experiments, to make progress in understanding the origins of hadron masses and the distribution of that mass within them. We compare the unique capabilities foreseen at the electron-ion collider (EIC) with those at the hadron-electron ring accelerator (HERA), the only previous electron-proton collider, and describe five key experimental measurements, enabled by the EIC and aimed at delivering fundamental insights that will generate concrete answers to the questions of how mass and structure arise in the pion and kaon, the Standard Model's NG modes, whose surprisingly low mass is critical to the evolution of our Universe.

Proceedings ArticleDOI
15 Sep 2019
TL;DR: An adaptation of Transformer to end-to-end SLT is presented that consists in downsampling the input with convolutional neural networks to make the training process feasible on GPUs, modeling the bidimensional nature of a spectrogram, and adding a distance penalty to the attention so as to bias it towards local context.
Abstract: Neural end-to-end architectures for sequence-to-sequence learning represent the state of the art in machine translation (MT) and speech recognition (ASR). Their use is also promising for end-to-end spoken language translation (SLT), which combines the main challenges of ASR and MT. Exploiting existing neural architectures, however, requires task-specific adaptations. A network that has obtained state-of-the-art results in MT with reduced training time is Transformer. However, its direct application to speech input is hindered by two limitations of the self-attention network on which it is based: quadratic memory complexity and no explicit modeling of short-range dependencies between input features. High memory complexity poses constraints on the size of models trainable with a GPU, while the inadequate modeling of local dependencies harms final translation quality. This paper presents an adaptation of Transformer to end-to-end SLT that consists in: i) downsampling the input with convolutional neural networks to make the training process feasible on GPUs, ii) modeling the bidimensional nature of a spectrogram, and iii) adding a distance penalty to the attention, so as to bias it towards local context. SLT experiments on 8 language directions show that, with our adaptation, Transformer outperforms a strong RNN-based baseline with a significant reduction in training time.
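
Point iii) above can be made concrete with a few lines of PyTorch. The sketch below subtracts a distance penalty from the self-attention logits before the softmax so that nearby positions are favored; the logarithmic penalty shape is an assumption made here for illustration, and the tensors are single-head toy inputs rather than the paper's architecture.

import torch

def locally_biased_attention(q, k, v):
    # q, k, v: (T, d) query/key/value tensors for one head and one sequence.
    T, d = q.shape
    logits = q @ k.T / d ** 0.5                           # scaled dot-product attention
    pos = torch.arange(T)
    dist = (pos[:, None] - pos[None, :]).abs().float()    # |i - j| between positions
    logits = logits - torch.log1p(dist)                   # penalty grows with distance
    return torch.softmax(logits, dim=-1) @ v

q = k = v = torch.randn(10, 64)
print(locally_biased_attention(q, k, v).shape)            # torch.Size([10, 64])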

Journal ArticleDOI
TL;DR: In this survey, the most relevant structure learning algorithms proposed in the literature are reviewed and classified according to the approach they follow for solving the problem, and alternatives for handling missing data and continuous variables are shown.
Abstract: A necessary step in the development of artificial intelligence is to enable a machine to represent how the world works, building an internal structure from data. This structure should hold a good trade-off between expressive power and querying efficiency. Bayesian networks have proven to be an effective and versatile tool for the task at hand. They have been applied to modeling knowledge in a variety of fields, ranging from bioinformatics to law, from image processing to economic risk analysis. A crucial aspect is learning the dependency graph of a Bayesian network from data. This task, called structure learning, is NP-hard and is the subject of intense, cutting-edge research. In short, it can be thought of as choosing one graph over the many candidates, grounding our reasoning over a collection of samples of the distribution generating the data. The number of possible graphs increases very quickly with the number of variables, so searching this space and selecting one graph over the others quickly becomes burdensome. In this survey, we review the most relevant structure learning algorithms that have been proposed in the literature. We classify them according to the approach they follow for solving the problem, and we also show alternatives for handling missing data and continuous variables. An extensive review of existing software tools is also given.
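
Among the score-based family such surveys cover, the simplest representative is greedy hill climbing over single-edge additions. The Python skeleton below shows the search loop with an explicit acyclicity check; the local score is a crude stand-in (real learners use BIC, BDeu, and similar scores), and the whole snippet is illustrative rather than any specific algorithm from the survey.

import itertools
import numpy as np

def local_score(data, parents, child):
    # Stand-in local score: residual variance of a linear fit of the child on
    # its parents, with a crude penalty per parent (real learners: BIC/BDeu).
    y = data[:, child]
    X = np.column_stack([data[:, list(parents)], np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return -np.var(y - X @ coef) - 0.1 * len(parents)

def creates_cycle(parents, i, j):
    # Adding i -> j closes a cycle iff j is already an ancestor of i.
    stack, seen = [i], set()
    while stack:
        node = stack.pop()
        if node == j:
            return True
        seen.add(node)
        stack.extend(p for p in parents[node] if p not in seen)
    return False

def hill_climb(data, eps=1e-6):
    n = data.shape[1]
    parents = {v: set() for v in range(n)}
    improved = True
    while improved:
        improved = False
        for i, j in itertools.permutations(range(n), 2):
            if i in parents[j] or creates_cycle(parents, i, j):
                continue
            gain = local_score(data, parents[j] | {i}, j) - local_score(data, parents[j], j)
            if gain > eps:
                parents[j].add(i)
                improved = True
    return parents                     # parents[v] = learned parent set of v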

Journal ArticleDOI
TL;DR: The proposed matching framework has been evaluated using many different types of multimodal images, and the results demonstrate its superior matching performance with respect to the state-of-the-art methods.
Abstract: While image matching has been studied in the remote sensing community for decades, matching multimodal data [e.g., optical, light detection and ranging (LiDAR), synthetic aperture radar (SAR), and map] remains a challenging problem because of significant nonlinear intensity differences between such data. To address this problem, we present a novel fast and robust template matching framework integrating local descriptors for multimodal images. First, a local descriptor [such as the histogram of oriented gradients (HOG), local self-similarity (LSS), or speeded-up robust features (SURF)] is extracted at each pixel to form a pixelwise feature representation of an image. Then, we define a fast similarity measure based on the feature representation using the fast Fourier transform (FFT) in the frequency domain. A template matching strategy is employed to detect correspondences between images. In this procedure, we also propose a novel pixelwise feature representation using orientated gradients of images, which is named channel features of orientated gradients (CFOG). This novel feature is an extension of the pixelwise HOG descriptor, with superior performance in image matching and computational efficiency. The major advantages of the proposed matching framework include: 1) structural similarity representation using the pixelwise feature description and 2) high computational efficiency due to the use of FFT. The proposed matching framework has been evaluated using many different types of multimodal images, and the results demonstrate its superior matching performance with respect to the state-of-the-art methods.
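
The "fast similarity measure in the frequency domain" mentioned above can be illustrated with numpy/scipy: a sum-of-squared-differences (SSD) surface between a template of pixelwise descriptors and every offset of a search image decomposes into terms that are all computable with FFT-based convolutions. The random descriptors below are stand-ins; CFOG itself is not reproduced here.

import numpy as np
from scipy.signal import fftconvolve

def fft_ssd_map(search, template):
    # search: (H, W, D), template: (h, w, D) pixelwise feature descriptors.
    # SSD(u, v) = ||T||^2 - 2 <T, S_patch> + ||S_patch||^2, all terms via FFT.
    corr = sum(fftconvolve(search[..., d], template[::-1, ::-1, d], mode="valid")
               for d in range(search.shape[-1]))
    patch_energy = fftconvolve((search ** 2).sum(axis=-1),
                               np.ones(template.shape[:2]), mode="valid")
    return (template ** 2).sum() - 2.0 * corr + patch_energy

search = np.random.rand(100, 100, 8)
template = search[40:56, 30:46]                      # true offset is (40, 30)
ssd = fft_ssd_map(search, template)
print(np.unravel_index(ssd.argmin(), ssd.shape))     # -> (40, 30)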


Journal ArticleDOI
14 Jan 2019 - Sensors
TL;DR: A review on the latest SiPM technologies developed at Fondazione Bruno Kessler, characterized by a peak detection efficiency in the near-UV and customized according to the needs of different applications, is presented.
Abstract: Different applications require different customizations of silicon photomultiplier (SiPM) technology. We present a review of the latest SiPM technologies developed at Fondazione Bruno Kessler (FBK, Trento), characterized by a peak detection efficiency in the near-UV and customized according to the needs of different applications. The original near-UV sensitive, high-density SiPMs (NUV-HD), optimized for Positron Emission Tomography (PET) applications, feature a peak photon detection efficiency (PDE) of 63% at 420 nm with a 35 μm cell size and a dark count rate (DCR) of 100 kHz/mm². Correlated noise probability is around 25% at a PDE of 50% at 420 nm. The technology provides a coincidence resolving time (CRT) of 100 ps FWHM (full width at half maximum) in the detection of 511 keV photons when used for the readout of LYSO(Ce) scintillator (Cerium-doped lutetium-yttrium oxyorthosilicate), and down to 75 ps FWHM with LSO(Ce:Ca) scintillator (Cerium- and Calcium-doped lutetium oxyorthosilicate). Starting from this technology, we developed three variants, optimized according to different sets of specifications. NUV-HD–LowCT features a 60% reduction of direct crosstalk probability, for applications such as the Cherenkov Telescope Array (CTA). NUV-HD–Cryo was optimized for cryogenic operation and for large photosensitive areas; the reference application, in this case, is the readout of liquid noble-gas scintillators, such as liquid argon. Measurements at 77 K showed a remarkably low DCR of a few mHz/mm². Finally, vacuum-UV (VUV)-HD features an increased sensitivity to VUV light, aiming at direct detection of photons below 200 nm; a PDE in excess of 20% at 175 nm was measured in liquid Xenon. In the paper, we discuss the specifications on the SiPM related to different types of applications, the SiPM design challenges and process optimizations, and the results from the experimental characterization of the different NUV-sensitive technologies developed at FBK.

Journal ArticleDOI
TL;DR: The authors review recent progress in this paradigm shift that drives the creation of a network theory based fundamentally on quantum effects, pinpointing the similarities and the differences found at the intersection of these two fields.
Abstract: Recent progress in applying complex network theory to problems in quantum information has resulted in a beneficial cross-over. Complex network methods have successfully been applied to transport and entanglement models, while information physics is setting the stage for a theory of complex systems with quantum information-inspired methods. Novel quantum-induced effects have been predicted in random graphs—where edges represent entangled links—and quantum computer algorithms have been proposed to offer enhancement for several network problems. Here we review the results at the cutting edge, pinpointing the similarities and the differences found at the intersection of these two fields. Quantum communication and computing are now in a data-intensive domain where a classical network describing a quantum system seems no longer sufficient to yield a generalization of complex network methods to the quantum domain. The authors review recent progress in this paradigm shift that drives the creation of a network theory based fundamentally on quantum effects.

Journal ArticleDOI
TL;DR: An automated approach, based on a deep end-to-end 2D convolutional neural network for slice-based segmentation of 3D volumetric data, is presented for segmenting multiple sclerosis lesions from multi-modal brain magnetic resonance images.

Journal ArticleDOI
TL;DR: In this paper, the authors presented the result of a search for galactic axions using a haloscope based on a $36\,\mbox{cm}^3$ NbTi superconducting cavity.
Abstract: To account for the dark matter content in our Universe, post-inflationary scenarios predict for the QCD axion a mass in the range $(10-10^3)\,\mu\mbox{eV}$. Searches with haloscope experiments in this mass range require the monitoring of resonant cavity modes with frequency above 5\,GHz, where several experimental limitations occur due to linear amplifiers, small volumes, and low quality factors of Cu resonant cavities. In this paper we deal with the last issue, presenting the result of a search for galactic axions using a haloscope based on a $36\,\mbox{cm}^3$ NbTi superconducting cavity. The cavity worked at $T=4\,\mbox{K}$ in a 2\,T magnetic field and exhibited a quality factor $Q_0= 4.5\times10^5$ for the TM010 mode at 9\,GHz. With such values of $Q$ the axion signal is significantly increased with respect to copper cavity haloscopes. Operating this setup we set the limit $g_{a\gamma\gamma}<1.03\times10^{-12}\,\mbox{GeV}^{-1}$ on the axion photon coupling for a mass of about 37\,$\mu$eV. A comprehensive study of the NbTi cavity at different magnetic fields, temperatures, and frequencies is also presented.
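
The role of the quality factor in the statement "with such values of Q the axion signal is significantly increased" follows from the standard haloscope scaling of the expected signal power (a textbook Sikivie-type expression, quoted here for context rather than from the paper):

P_{\mathrm{sig}} \;\propto\; g_{a\gamma\gamma}^{2}\,\frac{\rho_a}{m_a}\,B_0^{2}\,V\,C_{mnl}\,Q_L

with ρ_a the local dark matter density, m_a the axion mass, B₀ the magnetic field, V the cavity volume, C_mnl the mode form factor, and Q_L the loaded quality factor; raising Q well above what copper cavities achieve at these frequencies thus directly raises the attainable signal.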

Journal ArticleDOI
TL;DR: In this paper, the radiation resistance of 50-micron thick Low Gain Avalanche Diodes (LGADs) manufactured at the Fondazione Bruno Kessler (FBK) employing different dopings in the gain layer was reported.
Abstract: In this paper, we report on the radiation resistance of 50-μm-thick Low Gain Avalanche Diodes (LGADs) manufactured at the Fondazione Bruno Kessler (FBK) employing different dopings in the gain layer. LGADs with a gain layer made of Boron, Boron low-diffusion, Gallium, Carbonated Boron, and Carbonated Gallium have been designed and successfully produced at FBK. These sensors have been exposed to neutron fluences up to φ_n ∼ 3·10¹⁶ n/cm² and to proton fluences up to φ_p ∼ 9·10¹⁵ p/cm² to test their radiation resistance. The experimental results show that Gallium-doped LGADs are more heavily affected by the initial acceptor removal mechanism than those doped with Boron, while the addition of Carbon reduces this effect for both Gallium and Boron doping. The Boron low-diffusion gain layer shows a higher radiation resistance than the standard Boron implant, indicating a dependence of the initial acceptor removal mechanism upon the implant density.
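
The "initial acceptor removal" quoted above is commonly parametrized in the LGAD literature with an exponential decay of the active gain-layer doping with fluence (a standard parametrization, stated here for context rather than taken from this paper):

N_A(\phi) \;=\; N_A(0)\, e^{-c\,\phi}

where N_A is the active acceptor concentration, φ the fluence, and c the removal coefficient; the paper's comparison of Boron, Gallium, and carbonated splits amounts to measuring how the dopant species and Carbon co-implantation change c.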

Journal ArticleDOI
TL;DR: A direct comparison of three different numerical analytic continuation methods is presented: the Maximum Entropy Method, the Backus–Gilbert method, and the Schlessinger point (Resonances Via Padé) method.

Journal ArticleDOI
TL;DR: Evidence for the attractive effect of the Σ_c^+ D̄^0 channel, which is not strong enough, however, to form a bound state, is found by exploring several amplitude parametrizations.
Abstract: We study the nature of the new signal reported by LHCb in the J/ψp spectrum. Based on S-matrix principles, we perform a minimum-bias analysis of the underlying reaction amplitude, focusing on the analytic properties that can be related to the microscopic origin of the P_c(4312)^+ peak. By exploring several amplitude parametrizations, we find evidence for the attractive effect of the Σ_c^+ D̄^0 channel, which is not strong enough, however, to form a bound state.

Journal ArticleDOI
TL;DR: A literature review of human–computer interaction works on wearable systems for sports identified five themes across the papers: the different research perspectives, the type of sports and sportspeople, the roles of wearables in sports, their wearability, and the different types of feedback.
Abstract: This paper presents a literature review of human–computer interaction works on wearable systems for sports. We selected a corpus of 57 papers and analyzed them through the grounded theory for literature review approach. We identified five themes across the papers: the different research perspectives, the types of sports and sportspeople, the roles of wearables in sports, their wearability, and the different types of feedback. These themes helped us delineate opportunities for future research: the investigation of different form factors and types of feedback; the consideration of different sportspeople and collaborative tasks; the need to push the boundaries of the sports domain; the exploration of the evolution of sports; the interconnection of different devices; and increased methodological rigor.

Journal ArticleDOI
TL;DR: In this paper, the authors give an overview of the main properties and technological implementation of densely packed Single-photon Avalanche Diode arrays, which are commonly known as Silicon Photomultipliers, or SiPMs.
Abstract: In this paper, we give an overview of the main properties and technological implementation of densely packed Single-Photon Avalanche Diode arrays, commonly known as Silicon Photomultipliers, or SiPMs. These detectors feature high internal gain, single-photon sensitivity, a high Photon Detection Efficiency, a proportional response to weak and fast light flashes, excellent timing resolution, low bias voltage, ruggedness, and insensitivity to magnetic fields. They compare favorably to the traditional Photomultiplier Tube in several applications. In this overview paper, we go through the SPAD/SiPM theory of operation, the modern SiPM implementations, and the typical technological options for building the sensor. This is done in conjunction with the description of the main SiPM parameters, such as the Photon Detection Efficiency, the electrical properties, the primary and correlated noise sources, and the Single Photon Time Resolution.

Journal ArticleDOI
TL;DR: In this article, the authors use the rapidity of the dense target (which corresponds to Bjorken x) instead of that of the dilute projectile as an evolution time.
Abstract: The standard formulation of high-energy evolution in perturbative QCD, based on the Balitsky-Kovchegov equation, is known to suffer from severe instabilities associated with radiative corrections enhanced by double transverse logarithms, which occur in all orders starting with the next-to-leading one. Over the last years, several methods have been devised to resum such corrections by enforcing the time-ordering of the successive gluon emissions. We observe that the instability problem is not fully cured by these methods: various prescriptions for performing the resummation lead to very different physical results and thus lack predictive power. We argue that this problem can be avoided by using the rapidity of the dense target (which corresponds to Bjorken x) instead of that of the dilute projectile as an evolution time. This automatically ensures the proper time-ordering and also allows for a direct physical interpretation of the results. We explicitly perform this change of variables at NLO. We observe the emergence of a new class of double-logarithmic corrections, potentially leading to instabilities, which are however less severe, since they are disfavoured by the typical BK evolution for “dilute-dense” scattering. We propose several prescriptions for resumming these new double logarithms to all orders and find only little scheme dependence: different prescriptions lead to results which are consistent with each other to the accuracy of interest. We restore full NLO accuracy by completing one of the resummed equations (non-local in rapidity) with the remaining NLO corrections.

Proceedings ArticleDOI
01 Jul 2019
TL;DR: In this paper, the authors describe the creation of the first large-scale, multilingual, expert-based dataset of hate-speech/counter-narrative pairs, which has been built with the effort of more than 100 operators from three different NGOs that applied their training and expertise to the task.
Abstract: Although there is an unprecedented effort to provide adequate responses in terms of laws and policies to hate content on social media platforms, dealing with hatred online is still a tough problem. Tackling hate speech in the standard way of content deletion or user suspension may be charged with censorship and overblocking. One alternative strategy, which has received little attention so far from the research community, is to actually oppose hate content with counter-narratives (i.e. informed textual responses). In this paper, we describe the creation of the first large-scale, multilingual, expert-based dataset of hate-speech/counter-narrative pairs. This dataset has been built with the effort of more than 100 operators from three different NGOs that applied their training and expertise to the task. Together with the collected data, we also provide additional annotations about expert demographics, hate and response type, and data augmentation through translation and paraphrasing. Finally, we provide initial experiments to assess the quality of our data.
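
To make the shape of the released resource concrete, here is a hypothetical record layout for one hate-speech/counter-narrative pair carrying the annotation types listed above; every field name is illustrative and not the dataset's actual schema.

from dataclasses import dataclass, field

@dataclass
class CounterNarrativePair:
    hate_speech: str              # the hateful message
    counter_narrative: str        # the informed textual response
    language: str                 # the dataset is multilingual
    hate_type: str                # annotated type of hate
    response_type: str            # annotated type of response
    operator_demographics: dict = field(default_factory=dict)  # expert metadata
    augmented: bool = False       # True if obtained via translation/paraphrasing

pair = CounterNarrativePair(
    hate_speech="<hateful message>",
    counter_narrative="<informed textual response>",
    language="en",
    hate_type="<category>",
    response_type="<strategy>",
)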

Journal ArticleDOI
TL;DR: A new approach to prepare gold-nanoparticle-decorated multiwalled carbon nanotubes (MWCNTs) using cysteaminium chloride, via the formation of a zwitterionic acid-base bond, is reported.
Abstract: Gold nanoparticle (AuNP)-decorated CNTs are promising materials for photocatalysis and biosensors. However, the synthesis of AuNPs chemically linked to the walls of MWCNTs is challenging, and toxic products such as thionyl chloride (SOCl2) or [1-ethyl-3(dimethyl-amino) propyl] carbodiimide hydrochloride (EDAC) need to be used. This work reports a new approach to prepare gold-nanoparticle-decorated multiwalled carbon nanotubes (MWCNTs) using cysteaminium chloride, via the formation of a zwitterionic acid-base bond. The grafting process consists of 3 main steps: oxidation, thiolation, and decoration of AuNPs on the surface of the MWCNTs. The completion of each step has been verified by both spectroscopic techniques (Raman, UV-Vis, FT-IR) and Scanning Electron Microscopy (SEM). The chemical bonding states of the synthesized products have been proven by X-ray photoelectron spectroscopy (XPS).

Journal ArticleDOI
TL;DR: In this paper, the authors consider the recent partial wave analysis of the η(′)π system by the COMPASS Collaboration and provide a robust extraction of a single exotic π_1 resonant pole, with mass and width 1564 ± 24 ± 86 MeV and 492 ± 54 ± 102 MeV, which couples to both η(′)π channels.
Abstract: Mapping states with explicit gluonic degrees of freedom in the light sector is a challenge, and has led to controversies in the past. In particular, experiments have reported two different hybrid candidates with spin-exotic signature, π_1(1400) and π_1(1600), which couple separately to ηπ and η′π. This picture is not compatible with recent lattice QCD estimates for hybrid states, nor with most phenomenological models. We consider the recent partial wave analysis of the η(′)π system by the COMPASS Collaboration. We fit the extracted intensities and phases with a coupled-channel amplitude that enforces the unitarity and analyticity of the S matrix. We provide a robust extraction of a single exotic π_1 resonant pole, with mass and width 1564 ± 24 ± 86 MeV and 492 ± 54 ± 102 MeV, which couples to both η(′)π channels. We find no evidence for a second exotic state. We also provide the resonance parameters of the a_2(1320) and a_2′(1700).

Journal ArticleDOI
TL;DR: In this article, the authors propose a model for the contribution to the Bethe-Salpeter kernel deriving from the non-Abelian anomaly and use it to calculate γ*γ → η, η′ transition form factors on the entire domain of spacelike momenta.
Abstract: Using a continuum approach to the hadron bound-state problem, we calculate γ*γ → η, η′ transition form factors on the entire domain of spacelike momenta, for comparison with existing experiments and in anticipation of new precision data from next-generation e⁺e⁻ colliders. One novel feature is a model for the contribution to the Bethe-Salpeter kernel deriving from the non-Abelian anomaly, an element which is crucial for any computation of η, η′ properties. The study also delivers predictions for the amplitudes that describe the light- and strange-quark distributions within the η, η′. Our results compare favorably with available data. Important to this at large Q² is a sound understanding of QCD evolution, which has a visible impact on the η′ in particular. Our analysis also provides some insights into the properties of η, η′ mesons and associated observable manifestations of the non-Abelian anomaly.
