
Showing papers by "Technical University of Dortmund" published in 2020


Posted Content
TL;DR: The OGB datasets are large-scale, encompass multiple important graph ML tasks, and cover a diverse range of domains, from social and information networks to biological networks, molecular graphs, source code ASTs, and knowledge graphs; benchmark experiments on them indicate fruitful opportunities for future research.
Abstract: We present the Open Graph Benchmark (OGB), a diverse set of challenging and realistic benchmark datasets to facilitate scalable, robust, and reproducible graph machine learning (ML) research. OGB datasets are large-scale, encompass multiple important graph ML tasks, and cover a diverse range of domains, ranging from social and information networks to biological networks, molecular graphs, source code ASTs, and knowledge graphs. For each dataset, we provide a unified evaluation protocol using meaningful application-specific data splits and evaluation metrics. In addition to building the datasets, we also perform extensive benchmark experiments for each dataset. Our experiments suggest that OGB datasets present significant challenges of scalability to large-scale graphs and out-of-distribution generalization under realistic data splits, indicating fruitful opportunities for future research. Finally, OGB provides an automated end-to-end graph ML pipeline that simplifies and standardizes the process of graph data loading, experimental setup, and model evaluation. OGB will be regularly updated and welcomes inputs from the community. OGB datasets as well as data loaders, evaluation scripts, baseline code, and leaderboards are publicly available at this https URL .
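
A minimal sketch of the loading-and-evaluation pipeline the abstract describes, assuming the `ogb` Python package and its `ogbg-molhiv` dataset; the prediction scores below are random placeholders standing in for a trained model.

```python
import numpy as np
from ogb.graphproppred import GraphPropPredDataset, Evaluator

# Download the dataset (on first use) and retrieve its standardized split.
dataset = GraphPropPredDataset(name="ogbg-molhiv")
split_idx = dataset.get_idx_split()
test_idx = split_idx["test"]

# Each dataset ships with its own Evaluator and metric (ROC-AUC for ogbg-molhiv).
evaluator = Evaluator(name="ogbg-molhiv")
y_true = np.array([dataset[i][1] for i in test_idx])  # (n, 1) binary labels
y_pred = np.random.rand(len(test_idx), 1)             # placeholder model scores
print(evaluator.eval({"y_true": y_true, "y_pred": y_pred}))  # e.g. {'rocauc': ...}
```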

1,097 citations


Journal ArticleDOI
TL;DR: No group of filter methods always outperforms all others, but recommendations are made on filter methods that perform well on many of the data sets, and groups of filters that are similar with respect to the order in which they rank the features are identified.
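
As a hedged illustration of the comparison the TL;DR describes (not the paper's exact benchmark or method set), the sketch below ranks features with two scikit-learn filter methods on toy data and measures how similar the resulting orders are:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif, mutual_info_classif

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

scores_f = f_classif(X, y)[0]                          # ANOVA F-score filter
scores_mi = mutual_info_classif(X, y, random_state=0)  # mutual-information filter

# Rank features by each filter (0 = best) and compare the orders.
rank_f = np.argsort(np.argsort(-scores_f))
rank_mi = np.argsort(np.argsort(-scores_mi))
rho, _ = spearmanr(rank_f, rank_mi)
print(f"rank similarity (Spearman rho): {rho:.2f}")  # similar filters -> rho near 1
```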

338 citations


Journal ArticleDOI
TL;DR: This Fundamentals article first synthesizes research on digital platforms and digital platform ecosystems to provide a definition that integrates both concepts, and then uses this definition to explain how different digital platform ecosystems vary according to three core building blocks.
Abstract: Digital platforms are an omnipresent phenomenon that challenges incumbents by changing how we consume and provide digital products and services. Whereas traditional firms create value within the boundaries of a company or a supply chain, digital platforms utilize an ecosystem of autonomous agents to co-create value. Scholars from various disciplines, such as economics, technology management, and information systems have taken different perspectives on digital platform ecosystems. In this Fundamentals article, we first synthesize research on digital platforms and digital platform ecosystems to provide a definition that integrates both concepts. Second, we use this definition to explain how different digital platform ecosystems vary according to three core building blocks: (1) platform ownership, (2) value-creating mechanisms, and (3) complementor autonomy. We conclude by giving an outlook on four overarching research areas that connect the building blocks: (1) technical properties and value creation; (2) complementor interaction with the ecosystem; (3) value capture; and (4) the make-or-join decision in digital platform ecosystems.

304 citations


Journal ArticleDOI
TL;DR: This consensus document describes what constitutes best practice in pharmacological research on bioactive preparations derived from natural sources, providing a perspective on what the leading specialist journals in the field consider the core characteristics of good research.

297 citations


Journal ArticleDOI
Georges Aad1, Brad Abbott2, Dale Charles Abbott3, Ovsat Abdinov4  +2934 moreInstitutions (199)
TL;DR: In this article, a search for the electroweak production of charginos and sleptons decaying into final states with two electrons or muons is presented, based on 139 fb$^{-1}$ of proton-proton collisions recorded by the ATLAS detector at the Large Hadron Collider at $\sqrt{s}=13$ TeV.
Abstract: A search for the electroweak production of charginos and sleptons decaying into final states with two electrons or muons is presented. The analysis is based on 139 fb$^{-1}$ of proton–proton collisions recorded by the ATLAS detector at the Large Hadron Collider at $\sqrt{s}=13$ $\text {TeV}$. Three R-parity-conserving scenarios where the lightest neutralino is the lightest supersymmetric particle are considered: the production of chargino pairs with decays via either W bosons or sleptons, and the direct production of slepton pairs. The analysis is optimised for the first of these scenarios, but the results are also interpreted in the others. No significant deviations from the Standard Model expectations are observed and limits at 95% confidence level are set on the masses of relevant supersymmetric particles in each of the scenarios. For a massless lightest neutralino, masses up to 420 $\text {Ge}\text {V}$ are excluded for the production of the lightest-chargino pairs assuming W-boson-mediated decays and up to 1 $\text {TeV}$ for slepton-mediated decays, whereas for slepton-pair production masses up to 700 $\text {Ge}\text {V}$ are excluded assuming three generations of mass-degenerate sleptons.

272 citations


Journal ArticleDOI
M. G. Aartsen1, Markus Ackermann, Jenni Adams1, Juanan Aguilar2  +361 moreInstitutions (48)
TL;DR: The results, all based on searches for a cumulative neutrino signal integrated over the 10 years of available data, motivate further study of these and similar sources, including time-dependent analyses, multimessenger correlations, and the possibility of stronger evidence with coming upgrades to the detector.
Abstract: This Letter presents the results from pointlike neutrino source searches using ten years of IceCube data collected between April 6, 2008 and July 10, 2018. We evaluate the significance of an astrophysical signal from a pointlike source looking for an excess of clustered neutrino events with energies typically above ∼1 TeV among the background of atmospheric muons and neutrinos. We perform a full-sky scan, a search within a selected source catalog, a catalog population study, and three stacked Galactic catalog searches. The most significant point in the northern hemisphere from scanning the sky is coincident with the Seyfert II galaxy NGC 1068, which was included in the source catalog search. The excess at the coordinates of NGC 1068 is inconsistent with background expectations at the level of 2.9σ after accounting for statistical trials from the entire catalog. The combination of this result along with excesses observed at the coordinates of three other sources, including TXS 0506+056, suggests that, collectively, correlations with sources in the northern catalog are inconsistent with background at 3.3σ significance. The southern catalog is consistent with background. These results, all based on searches for a cumulative neutrino signal integrated over the 10 years of available data, motivate further study of these and similar sources, including time-dependent analyses, multimessenger correlations, and the possibility of stronger evidence with coming upgrades to the detector.
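
For intuition on the "statistical trials" correction mentioned above, here is a generic look-elsewhere sketch; the collaboration's actual correction is derived from pseudo-experiments, and the pre-trial significance and number of trials below are invented placeholders, not IceCube numbers.

```python
from scipy.stats import norm

p_local = norm.sf(4.0)  # hypothetical pre-trial local p-value (a 4-sigma excess)
n_trials = 110          # hypothetical number of catalog positions searched

# Under the simplifying assumption of independent trials, the post-trial
# p-value is the chance of at least one such excess anywhere in the catalog.
p_global = 1.0 - (1.0 - p_local) ** n_trials
print(f"post-trial significance: {norm.isf(p_global):.1f} sigma")
```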

222 citations


Journal ArticleDOI
TL;DR: In this article, the authors report the generation of isolated soft X-ray attosecond pulses with an X-ray free-electron laser; the source has a pulse energy millions of times larger than that of any other source of isolated attosecond pulses in the soft X-ray spectral region and a peak power exceeding 100 GW, a unique combination of high intensity, high photon energy and short pulse duration.
Abstract: The quantum-mechanical motion of electrons in molecules and solids occurs on the sub-femtosecond timescale. Consequently, the study of ultrafast electronic phenomena requires the generation of laser pulses shorter than 1 fs and of sufficient intensity to interact with their target with high probability. Probing these dynamics with atomic-site specificity requires the extension of sub-femtosecond pulses to the soft X-ray spectral region. Here, we report the generation of isolated soft X-ray attosecond pulses with an X-ray free-electron laser. Our source has a pulse energy that is millions of times larger than any other source of isolated attosecond pulses in the soft X-ray spectral region, with a peak power exceeding 100 GW. This unique combination of high intensity, high photon energy and short pulse duration enables the investigation of electron dynamics with X-ray nonlinear spectroscopy and single-particle imaging, unlocking a path towards a new era of attosecond science.

220 citations


Journal ArticleDOI
TL;DR: In this paper, the authors identify some of the main political-economic factors behind car dependence, drawing together research from several fields, including automotive industry, provision of car infrastructure, political economy of urban sprawl, and provision of public transport.
Abstract: Research on car dependence exposes the difficulty of moving away from a car-dominated, high-carbon transport system, but neglects the political-economic factors underpinning car-dependent societies. Yet these factors are key constraints to attempts to ‘decouple' human well-being from energy use and climate change emissions. In this critical review paper, we identify some of the main political-economic factors behind car dependence, drawing together research from several fields. Five key constituent elements of what we call the ‘car-dependent transport system’ are identified: i) the automotive industry; ii) the provision of car infrastructure; iii) the political economy of urban sprawl; iv) the provision of public transport; v) cultures of car consumption. Using the ‘systems of provision’ approach within political economy, we locate the part played by each element within the key dynamic processes of the system as a whole. Such processes encompass industrial structure, political-economic relations, the built environment, and cultural feedback loops. We argue that linkages between these processes are crucial to maintaining car dependence and thus create carbon lock-in. In developing our argument we discuss several important characteristics of car-dependent transport systems: the role of integrated socio-technical aspects of provision, the opportunistic use of contradictory economic arguments serving industrial agendas, the creation of an apolitical facade around pro-car decision-making, and the ‘capture’ of the state within the car-dependent transport system. Through uncovering the constituents, processes and characteristics of car-dependent transport systems, we show that moving past the automobile age will require an overt and historically aware political program of research and action.

203 citations


Journal ArticleDOI
Georges Aad1, Brad Abbott2, Dale Charles Abbott3, A. Abed Abud4  +2954 moreInstitutions (198)
TL;DR: In this paper, trigger algorithms and selections at the ATLAS experiment were optimised to control the rates while retaining a high efficiency for physics analyses, in order to cope with a fourfold increase of peak LHC luminosity from 2015 to 2018 (Run 2) and a similar increase in the number of interactions per beam-crossing to about 60.
Abstract: Electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential for the ATLAS experiment to record signals for a wide variety of physics: from Standard Model processes to searches for new phenomena in both proton–proton and heavy-ion collisions. To cope with a fourfold increase of peak LHC luminosity from 2015 to 2018 (Run 2), to 2.1×1034cm-2s-1, and a similar increase in the number of interactions per beam-crossing to about 60, trigger algorithms and selections were optimised to control the rates while retaining a high efficiency for physics analyses. For proton–proton collisions, the single-electron trigger efficiency relative to a single-electron offline selection is at least 75% for an offline electron of 31 GeV, and rises to 96% at 60 GeV; the trigger efficiency of a 25 GeV leg of the primary diphoton trigger relative to a tight offline photon selection is more than 96% for an offline photon of 30 GeV. For heavy-ion collisions, the primary electron and photon trigger efficiencies relative to the corresponding standard offline selections are at least 84% and 95%, respectively, at 5 GeV above the corresponding trigger threshold.

180 citations


Journal ArticleDOI
Georges Aad1, Brad Abbott2, Dale Charles Abbott3, A. Abed Abud4  +2962 moreInstitutions (199)
TL;DR: A search for heavy neutral Higgs bosons is performed using the LHC Run 2 data, corresponding to an integrated luminosity of 139 fb^{-1} of proton-proton collisions at sqrt[s]=13 TeV recorded with the ATLAS detector.
Abstract: A search for heavy neutral Higgs bosons is performed using the LHC Run 2 data, corresponding to an integrated luminosity of 139 fb^{-1} of proton-proton collisions at sqrt[s]=13 TeV recorded with the ATLAS detector. The search for heavy resonances is performed over the mass range 0.2-2.5 TeV for the τ^{+}τ^{-} decay with at least one τ-lepton decaying into final states with hadrons. The data are in good agreement with the background prediction of the standard model. In the M_{h}^{125} scenario of the minimal supersymmetric standard model, values of tanβ>8 and tanβ>21 are excluded at the 95% confidence level for neutral Higgs boson masses of 1.0 and 1.5 TeV, respectively, where tanβ is the ratio of the vacuum expectation values of the two Higgs doublets.

178 citations


Book ChapterDOI
23 Aug 2020
TL;DR: Deep Local Shapes (DeepLS) as discussed by the authors replaces the dense volumetric signed distance function (SDF) representation used in traditional surface reconstruction systems with a set of locally learned continuous SDFs defined by a neural network.
Abstract: Efficiently reconstructing complex and intricate surfaces at scale is a long-standing goal in machine perception. To address this problem we introduce Deep Local Shapes (DeepLS), a deep shape representation that enables encoding and reconstruction of high-quality 3D shapes without prohibitive memory requirements. DeepLS replaces the dense volumetric signed distance function (SDF) representation used in traditional surface reconstruction systems with a set of locally learned continuous SDFs defined by a neural network, inspired by recent work such as DeepSDF. Unlike DeepSDF, which represents an object-level SDF with a neural network and a single latent code, we store a grid of independent latent codes, each responsible for storing information about surfaces in a small local neighborhood. This decomposition of scenes into local shapes simplifies the prior distribution that the network must learn, and also enables efficient inference. We demonstrate the effectiveness and generalization power of DeepLS by showing object shape encoding and reconstructions of full scenes, where DeepLS delivers high compression, accuracy, and local shape completion.
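
A minimal PyTorch sketch of the decomposition described above: a grid of independent latent codes, each decoded into a local SDF by one shared network. Sizes and the decoder architecture are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LocalSDFDecoder(nn.Module):
    """Shared decoder mapping (local latent code, local coordinate) -> signed distance."""
    def __init__(self, code_dim=128, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(code_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, code, x_local):
        # code: (N, code_dim); x_local: (N, 3) coordinates relative to the cell center
        return self.net(torch.cat([code, x_local], dim=-1))

codes = nn.Parameter(torch.zeros(32 * 32 * 32, 128))  # one independent code per cell
decoder = LocalSDFDecoder()
sdf = decoder(codes[:4], torch.rand(4, 3))  # query four points within their cells
```

In the auto-decoder style the paper inherits from DeepSDF, fitting a new scene optimizes only the grid of codes while the shared decoder weights stay fixed.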


Journal ArticleDOI
M. G. Aartsen1, Markus Ackermann, Jenni Adams1, Juanan Aguilar2  +355 moreInstitutions (48)
TL;DR: This analysis provides the most detailed characterization to date of the neutrino flux at energies below ∼100 TeV, improving on previous IceCube results, and suggests the existence of astrophysical neutrino sources characterized by dense environments which are opaque to gamma rays.
Abstract: We report on the first measurement of the astrophysical neutrino flux using particle showers (cascades) in IceCube data from 2010-2015. Assuming standard oscillations, the astrophysical neutrinos in this dedicated cascade sample are dominated (∼90%) by electron and tau flavors. The flux, observed in the sensitive energy range from 16 TeV to 2.6 PeV, is consistent with a single power-law model as expected from Fermi-type acceleration of high energy particles at astrophysical sources. We find the flux spectral index to be γ=2.53±0.07 and a flux normalization for each neutrino flavor of ϕ_{astro}=1.66_{-0.27}^{+0.25} at E_{0}=100 TeV, in agreement with IceCube's complementary muon neutrino results and with all-neutrino flavor fit results. In the measured energy range we reject spectral indices γ≤2.28 at ≥3σ significance level. Because of high neutrino energy resolution and low atmospheric neutrino backgrounds, this analysis provides the most detailed characterization of the neutrino flux at energies below ∼100 TeV compared to previous IceCube results. Results from fits assuming more complex neutrino flux models suggest a flux softening at high energies and a flux hardening at low energies (p value ≥0.06). The sizable and smooth flux measured below ∼100 TeV remains a puzzle. In order to not violate the isotropic diffuse gamma-ray background as measured by the Fermi Large Area Telescope, it suggests the existence of astrophysical neutrino sources characterized by dense environments which are opaque to gamma rays.
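
For reference, the single power-law model quoted above has the per-flavor form $\phi(E) = \phi_{\mathrm{astro}} \cdot (E/E_{0})^{-\gamma}$ with $E_{0}=100$ TeV, $\gamma = 2.53\pm0.07$ and $\phi_{\mathrm{astro}} = 1.66_{-0.27}^{+0.25}$; the abstract omits the normalization units, which in IceCube diffuse-flux analyses are conventionally $10^{-18}\ \mathrm{GeV^{-1}\,cm^{-2}\,s^{-1}\,sr^{-1}}$ (a convention assumed here, not stated in the text above).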

Journal ArticleDOI
01 Jul 2020
TL;DR: In this article, the authors outline the common features of climate delay discourses, which include emphasizing the negative social effects of climate policies and raising doubt that mitigation is possible, and provide a guide to identifying them.
Abstract: ‘Discourses of climate delay’ pervade current debates on climate action. These discourses accept the existence of climate change, but justify inaction or inadequate efforts. In contemporary discussions on what actions should be taken, by whom and how fast, proponents of climate delay would argue for minimal action or action taken by others. They focus attention on the negative social effects of climate policies and raise doubt that mitigation is possible. Here, we outline the common features of climate delay discourses and provide a guide to identifying them.

Journal ArticleDOI
22 Sep 2020
TL;DR: The data do not support the hypothesis of insufficient SARS-CoV-2-reactive immunity in critical COVID-19; instead, they indicate that activation of differentiated memory effector T cells could cause hyperreactivity and immunopathogenesis in critical patients.
Abstract: T cell immunity toward SARS-CoV-2 spike (S-), membrane (M-), and nucleocapsid (N-) proteins may define COVID-19 severity. Therefore, we compare the SARS-CoV-2-reactive T cell responses in moderate, severe, and critical COVID-19 patients and unexposed donors. Overlapping peptide pools of all three proteins induce SARS-CoV-2-reactive T cell response with dominance of CD4+ over CD8+ T cells and demonstrate interindividual immunity against the three proteins. M-protein induces the highest frequencies of CD4+ T cells, suggesting its relevance for diagnosis and vaccination. The T cell response of critical COVID-19 patients is robust and comparable or even superior to non-critical patients. Virus clearance and COVID-19 survival are not associated with either SARS-CoV-2 T cell kinetics or magnitude of T cell responses, respectively. Thus, our data do not support the hypothesis of insufficient SARS-CoV-2-reactive immunity in critical COVID-19. Conversely, it indicates that activation of differentiated memory effector T cells could cause hyperreactivity and immunopathogenesis in critical patients.

Journal ArticleDOI
Roel Aaij, C. Abellán Beteta1, Thomas Ackernley2, Bernardo Adeva3  +903 moreInstitutions (58)
TL;DR: In this article, both prompt-like and long-lived dark photons, A^{'}, produced in proton-proton collisions at a center-of-mass energy of 13 TeV were searched for using a data sample corresponding to an integrated luminosity of 5.5 fb^{-1} collected with the LHCb detector.
Abstract: Searches are performed for both promptlike and long-lived dark photons, A^{'}, produced in proton-proton collisions at a center-of-mass energy of 13 TeV. These searches look for A^{'}→μ^{+}μ^{-} decays using a data sample corresponding to an integrated luminosity of 5.5 fb^{-1} collected with the LHCb detector. Neither search finds evidence for a signal, and 90% confidence-level exclusion limits are placed on the γ-A^{'} kinetic mixing strength. The promptlike A^{'} search explores the mass region from near the dimuon threshold up to 70 GeV and places the most stringent constraints to date on dark photons with 214

Journal ArticleDOI
Roel Aaij, C. Abellán Beteta1, Thomas Ackernley2, Bernardo Adeva3  +900 moreInstitutions (59)
TL;DR: In this article, an angular analysis of the B^{0}→K^{*0}(→K^{+}π^{-})μ^{+}μ^{-} decay is presented using a dataset corresponding to an integrated luminosity of 4.7 fb^{-1} of pp collision data collected with the LHCb experiment.
Abstract: An angular analysis of the B^{0}→K^{*0}(→K^{+}π^{-})μ^{+}μ^{-} decay is presented using a dataset corresponding to an integrated luminosity of 4.7 fb^{-1} of pp collision data collected with the LHCb experiment. The full set of CP-averaged observables are determined in bins of the invariant mass squared of the dimuon system. Contamination from decays with the K^{+}π^{-} system in an S-wave configuration is taken into account. The tension seen between the previous LHCb results and the standard model predictions persists with the new data. The precise value of the significance of this tension depends on the choice of theory nuisance parameters.

Journal ArticleDOI
TL;DR: This work systematically reviews the literature on explanations in advice-giving systems, a family of systems that includes recommender systems, and derives a novel comprehensive taxonomy of aspects to be considered when designing explanation facilities for current and future decision support systems.
Abstract: With the recent advances in the field of artificial intelligence, an increasing number of decision-making tasks are delegated to software systems. A key requirement for the success and adoption of such systems is that users must trust system choices or even fully automated decisions. To achieve this, explanation facilities have been widely investigated as a means of establishing trust in these systems since the early years of expert systems. With today's increasingly sophisticated machine learning algorithms, new challenges in the context of explanations, accountability, and trust towards such systems constantly arise. In this work, we systematically review the literature on explanations in advice-giving systems. This is a family of systems that includes recommender systems, which is one of the most successful classes of advice-giving software in practice. We investigate the purposes of explanations as well as how they are generated, presented to users, and evaluated. As a result, we derive a novel comprehensive taxonomy of aspects to be considered when designing explanation facilities for current and future decision support systems. The taxonomy includes a variety of different facets, such as explanation objective, responsiveness, content and presentation. Moreover, we identified several challenges that remain unaddressed so far, for example related to fine-grained issues associated with the presentation of explanations and how explanation facilities are evaluated.

Journal ArticleDOI
TL;DR: This survey gives a comprehensive overview of techniques for kernel-based graph classification developed in the past 15 years and describes and categorizes graph kernels based on properties inherent to their design, such as the nature of their extracted graph features, their method of computation and their applicability to problems in practice.
Abstract: Graph kernels have become an established and widely-used technique for solving classification tasks on graphs. This survey gives a comprehensive overview of techniques for kernel-based graph classification developed in the past 15 years. We describe and categorize graph kernels based on properties inherent to their design, such as the nature of their extracted graph features, their method of computation and their applicability to problems in practice. In an extensive experimental evaluation, we study the classification accuracy of a large suite of graph kernels on established benchmarks as well as new datasets. We compare the performance of popular kernels with several baseline methods and study the effect of applying a Gaussian RBF kernel to the metric induced by a graph kernel. In doing so, we find that simple baselines become competitive after this transformation on some datasets. Moreover, we study the extent to which existing graph kernels agree in their predictions (and prediction errors) and obtain a data-driven categorization of kernels as result. Finally, based on our experimental results, we derive a practitioner’s guide to kernel-based graph classification.
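
The RBF transformation studied in the survey is easy to state concretely: given a precomputed graph-kernel Gram matrix, the induced squared distance is $d^2(i,j) = k(i,i) + k(j,j) - 2k(i,j)$, to which a Gaussian is applied. A small sketch using a toy Gram matrix (not one of the surveyed kernels):

```python
import numpy as np

def rbf_from_graph_kernel(K, sigma=1.0):
    """Apply a Gaussian RBF to the metric induced by a PSD graph-kernel matrix K."""
    diag = np.diag(K)
    d2 = diag[:, None] + diag[None, :] - 2.0 * K  # squared kernel-induced distance
    d2 = np.maximum(d2, 0.0)                      # guard against round-off negatives
    return np.exp(-d2 / (2.0 * sigma ** 2))

K = np.array([[4.0, 1.0], [1.0, 3.0]])  # toy Gram matrix for two graphs
print(rbf_from_graph_kernel(K))
```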

Journal ArticleDOI
TL;DR: To demonstrate the potential applicability of this new class of low-energy emitters in future photonic applications, such as nonclassical light sources for quantum communication or quantum cryptography, single-molecule photon-correlation experiments on 2 are successfully conducted, showing distinct antibunching as required for single-photon emitters.
Abstract: A series of copper(I) complexes bearing a cyclic (amino)(aryl)carbene (CAArC) ligand with various complex geometries have been investigated in great detail with regard to their structural, electronic, and photophysical properties. Comparison of [CuX(CAArC)] (X = Br (1), Cbz (2), acac (3), Ph2acac (4), Cp (5), and Cp* (6)) with known CuI complexes bearing cyclic (amino)(alkyl), monoamido, or diamido carbenes (CAAC, MAC, or DAC, respectively) as chromophore ligands reveals that the expanded π-system of the CAArC leads to relatively low energy absorption maxima between 350 and 550 nm in THF with high absorption coefficients of 5-15 × 103 M-1 cm-1 for 1-6. Furthermore, 1-5 show intense deep red to near-IR emission involving their triplet excited states in the solid state and in PMMA films with λemmax = 621-784 nm. Linear [Cu(Cbz)(DippCAArC)] (2) has been found to be an exceptional deep red (λmax = 621 nm, ϕ = 0.32, τav = 366 ns) thermally activated delayed fluorescence (TADF) emitter with a radiative rate constant kr of ca. 9 × 105 s-1, exceeding those of commercially employed IrIII- or PtII-based emitters. Time-resolved transient absorption and fluorescence upconversion experiments complemented by quantum chemical calculations employing Kohn-Sham density functional theory and multireference configuration interaction methods as well as temperature-dependent steady-state and time-resolved luminescence studies provide a detailed picture of the excited-state dynamics of 2. To demonstrate the potential applicability of this new class of low-energy emitters in future photonic applications, such as nonclassical light sources for quantum communication or quantum cryptography, we have successfully conducted single-molecule photon-correlation experiments of 2, showing distinct antibunching as required for single-photon emitters.
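
As a consistency check on the figures quoted above, the radiative rate constant follows directly from the photoluminescence quantum yield and average lifetime: $k_{r} = \phi/\tau_{av} = 0.32/366\ \mathrm{ns} \approx 8.7 \times 10^{5}\ \mathrm{s^{-1}}$, in line with the stated value of ca. $9 \times 10^{5}\ \mathrm{s^{-1}}$.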

Proceedings Article
30 Apr 2020
TL;DR: This work presents a two-stage neural architecture for learning and refining structural correspondences between graphs that scales well to large, real-world inputs while still being able to recover global correspondences consistently.
Abstract: This work presents a two-stage neural architecture for learning and refining structural correspondences between graphs. First, we use localized node embeddings computed by a graph neural network to obtain an initial ranking of soft correspondences between nodes. Secondly, we employ synchronous message passing networks to iteratively re-rank the soft correspondences to reach a matching consensus in local neighborhoods between graphs. We show, theoretically and empirically, that our message passing scheme computes a well-founded measure of consensus for corresponding neighborhoods, which is then used to guide the iterative re-ranking process. Our purely local and sparsity-aware architecture scales well to large, real-world inputs while still being able to recover global correspondences consistently. We demonstrate the practical effectiveness of our method on real-world tasks from the fields of computer vision and entity alignment between knowledge graphs, on which we improve upon the current state-of-the-art.
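
A minimal sketch of the first stage described above, with random tensors standing in for the GNN-computed node embeddings; the consensus re-ranking stage is omitted.

```python
import torch

x_s = torch.randn(5, 16)  # source-graph node embeddings (produced by a GNN)
x_t = torch.randn(7, 16)  # target-graph node embeddings

# Initial ranking of soft correspondences: row-normalized similarity scores.
S = torch.softmax(x_s @ x_t.t(), dim=-1)  # (5, 7), rows sum to 1
top_k = S.topk(k=3, dim=-1).indices       # keep sparse candidates per source node
print(top_k)
```

Keeping only the top-k candidates per node reflects the purely local, sparsity-aware design that lets the architecture scale to large inputs.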

Journal ArticleDOI
TL;DR: The Random Forest based MErging Procedure (RF-MEP), which combines information from ground-based measurements, state-of-the-art precipitation products, and topography-related features, is presented; it improves the representation of the spatio-temporal distribution of precipitation, especially in data-scarce regions.
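
An illustrative sketch of the merging idea named in the TL;DR: a random forest learns to predict gauge observations from several gridded precipitation products plus a topographic feature. All data, feature choices, and settings below are invented stand-ins, not the RF-MEP implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.gamma(2.0, 2.0, n),   # gridded precipitation product A (mm/day)
    rng.gamma(2.0, 2.0, n),   # gridded precipitation product B (mm/day)
    rng.uniform(0, 3000, n),  # elevation (m), a topography-related feature
])
y = 0.5 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(0.0, 0.5, n)  # synthetic gauge values

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
merged = model.predict(X)  # merged precipitation estimate per grid cell and time step
```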

Posted Content
TL;DR: An overview is provided of the vision of how machine learning will impact wireless communication systems, together with the ML methods that have the highest potential to be used in wireless networks.
Abstract: The focus of this white paper is on machine learning (ML) in wireless communications. 6G wireless communication networks will be the backbone of the digital transformation of societies by providing ubiquitous, reliable, and near-instant wireless connectivity for humans and machines. Recent advances in ML research have enabled a wide range of novel technologies such as self-driving vehicles and voice assistants. Such innovation is possible as a result of the availability of advanced ML models, large datasets, and high computational power. On the other hand, the ever-increasing demand for connectivity will require a lot of innovation in 6G wireless networks, and ML tools will play a major role in solving problems in the wireless domain. In this paper, we provide an overview of the vision of how ML will impact wireless communication systems. We first give an overview of the ML methods that have the highest potential to be used in wireless networks. Then, we discuss the problems that can be solved by using ML in various layers of the network such as the physical layer, medium access layer, and application layer. Zero-touch optimization of wireless networks using ML is another interesting aspect that is discussed in this paper. Finally, at the end of each section, important research questions that the section aims to answer are presented.

Journal ArticleDOI
TL;DR: In this paper, an enlarged scalar sector and Yukawa couplings between leptons and new vectorlike fermions were used to explain deviations from standard model predictions naturally.
Abstract: The measurements of the muon and electron anomalous magnetic moments hint at physics beyond the standard model. We show why and how models inspired by asymptotic safety can explain deviations from standard model predictions naturally. Our setup features an enlarged scalar sector and Yukawa couplings between leptons and new vectorlike fermions. Using the complete two-loop running of couplings, we observe a well-behaved high-energy limit of models including a stabilization of the Higgs. We find that a manifest breaking of lepton universality beyond standard model Yukawas is not necessary to explain the muon and electron anomalies. We further predict the tau anomalous magnetic moment and new particles in the TeV energy range, whose signatures at colliders are indicated. With small $CP$ phases, the electron EDM can be as large as the present bound.

Journal ArticleDOI
TL;DR: A design approach for the preparation of biologically relevant small-molecule libraries is discussed, harnessing the unprecedented combination of NP-derived fragments as an overarching strategy for the synthesis of new bioactive compounds.
Abstract: Natural products (NPs) are a significant source of inspiration towards the discovery of new bioactive compounds based on novel molecular scaffolds. However, there are currently only a small number of guiding synthetic strategies available to generate novel NP-inspired scaffolds, limiting both the number and types of compounds accessible. In this Perspective, we discuss a design approach for the preparation of biologically relevant small-molecule libraries, harnessing the unprecedented combination of NP-derived fragments as an overarching strategy for the synthesis of new bioactive compounds. These novel 'pseudo-natural product' classes retain the biological relevance of NPs, yet exhibit structures and bioactivities not accessible to nature or through the use of existing design strategies. We also analyse selected pseudo-NP libraries using chemoinformatic tools, to assess their molecular shape diversity and properties. To facilitate the exploration of biologically relevant chemical space, we identify design principles and connectivity patterns that would provide access to unprecedented pseudo-NP classes, offering new opportunities for bioactive small-molecule discovery.

Journal ArticleDOI
TL;DR: The authors introduced a set of global high-resolution (0.05°) precipitation (P) climatologies corrected for bias using streamflow (Q) observations from 9372 stations worldwide.
Abstract: We introduce a set of global high-resolution (0.05°) precipitation (P) climatologies corrected for bias using streamflow (Q) observations from 9372 stations worldwide. For each station, we ...

Journal ArticleDOI
TL;DR: Based on the totality of currently available scientific evidence, the present review does not support the presumption that fluoride should be assessed as a human developmental neurotoxicant at the current exposure levels in Europe.
Abstract: Recently, epidemiological studies have suggested that fluoride is a human developmental neurotoxicant that reduces measures of intelligence in children, placing it into the same category as toxic metals (lead, methylmercury, arsenic) and polychlorinated biphenyls. If true, this assessment would be highly relevant considering the widespread fluoridation of drinking water and the worldwide use of fluoride in oral hygiene products such as toothpaste. To gain a deeper understanding of these assertions, we reviewed the levels of human exposure, as well as results from animal experiments, particularly focusing on developmental toxicity, and the molecular mechanisms by which fluoride can cause adverse effects. Moreover, in vitro studies investigating fluoride in neuronal cells and precursor/stem cells were analyzed, and 23 epidemiological studies published since 2012 were considered. The results show that the margin of exposure (MoE) between no observed adverse effect levels (NOAELs) in animal studies and the current adequate intake (AI) of fluoride (50 µg/kg b.w./day) in humans ranges between 50 and 210, depending on the specific animal experiment used as reference. Even for unusually high fluoride exposure levels, an MoE of at least ten was obtained. Furthermore, concentrations of fluoride in human plasma are much lower than fluoride concentrations, causing effects in cell cultures. In contrast, 21 of 23 recent epidemiological studies report an association between high fluoride exposure and reduced intelligence. The discrepancy between experimental and epidemiological evidence may be reconciled with deficiencies inherent in most of these epidemiological studies on a putative association between fluoride and intelligence, especially with respect to adequate consideration of potential confounding factors, e.g., socioeconomic status, residence, breast feeding, low birth weight, maternal intelligence, and exposure to other neurotoxic chemicals. In conclusion, based on the totality of currently available scientific evidence, the present review does not support the presumption that fluoride should be assessed as a human developmental neurotoxicant at the current exposure levels in Europe.
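
The margin-of-exposure arithmetic underlying the review is simply $\mathrm{MoE} = \mathrm{NOAEL}/\mathrm{exposure}$; with the stated adequate intake of 50 µg/kg b.w./day, the reported MoE range of 50–210 back-calculates to animal NOAELs of roughly 2.5–10.5 mg/kg b.w./day (derived here from the figures given, not values stated explicitly in the text).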

Journal ArticleDOI
TL;DR: Task-specific stimulation protocols can improve executive functions in ADHD: anodal left DLPFC tDCS most clearly affected executive control functions, while cathodal left DLPFC tDCS improved inhibitory control.
Abstract: Objective: This study examined effects of transcranial direct current stimulation (tDCS) over the dorsolateral prefrontal cortex (DLPFC) and orbitofrontal cortex (OFC) on major executive functions ...

Journal ArticleDOI
TL;DR: This position paper brings together recent and emerging developments in the field of presenteeism; a critical synthesis of the evidence is needed due to persisting conceptual and methodological challenges.
Abstract: This position paper brings together recent and emerging developments in the field of presenteeism. A critical synthesis of the evidence is needed due to persisting conceptual and methodological cha...

Journal ArticleDOI
Georges Aad1, Brad Abbott2, Dale Charles Abbott3, A. Abed Abud4  +2957 moreInstitutions (201)
TL;DR: A search for narrowly resonant new physics is performed using a machine-learning anomaly detection procedure that does not rely on signal simulations for developing the analysis selection; the results are complementary to the dedicated searches for the case that B and C are standard model bosons.
Abstract: This Letter describes a search for narrowly resonant new physics using a machine-learning anomaly detection procedure that does not rely on signal simulations for developing the analysis selection. Weakly supervised learning is used to train classifiers directly on data to enhance potential signals. The targeted topology is dijet events and the features used for machine learning are the masses of the two jets. The resulting analysis is essentially a three-dimensional search A→BC, for m_{A}∼O(TeV), m_{B},m_{C}∼O(100 GeV) and B, C are reconstructed as large-radius jets, without paying a penalty associated with a large trials factor in the scan of the masses of the two jets. The full run 2 sqrt[s]=13 TeV pp collision dataset of 139 fb^{-1} recorded by the ATLAS detector at the Large Hadron Collider is used for the search. There is no significant evidence of a localized excess in the dijet invariant mass spectrum between 1.8 and 8.2 TeV. Cross-section limits for narrow-width A, B, and C particles vary with m_{A}, m_{B}, and m_{C}. For example, when m_{A}=3 TeV and m_{B}≳200 GeV, a production cross section between 1 and 5 fb is excluded at 95% confidence level, depending on m_{C}. For certain masses, these limits are up to 10 times more sensitive than those obtained by the inclusive dijet search. These results are complementary to the dedicated searches for the case that B and C are standard model bosons.
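
A schematic of the weakly supervised idea described above, in the spirit of "classification without labels": a classifier is trained to separate signal-region events from sideband events using only the two jet masses, so any real signal that makes the two mixtures differ is enhanced without signal simulations. All data below are synthetic placeholders, not ATLAS data or the exact analysis.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
bkg_sr = rng.exponential(100.0, (5000, 2))        # (m_J1, m_J2) in the signal region
bkg_sb = rng.exponential(100.0, (5000, 2))        # (m_J1, m_J2) in the sideband
sig = rng.normal([200.0, 170.0], 10.0, (250, 2))  # hidden resonance, signal region only

X = np.vstack([bkg_sr, sig, bkg_sb])
y = np.concatenate([np.ones(len(bkg_sr) + len(sig)), np.zeros(len(bkg_sb))])

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0).fit(X, y)
# High-score events in the signal region are candidates for a localized excess;
# if the two mixed samples are identical (no signal), the classifier learns nothing.
```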