
Showing papers from "Technical University of Dortmund" published in 2018


Journal ArticleDOI
TL;DR: The state of non-invasive brain stimulation research in humans is summarized, some current debates about properties and limitations of these methods are discussed, and recommendations for how these challenges may be addressed are given.
Abstract: In the past three decades, our understanding of brain–behavior relationships has been significantly shaped by research using non-invasive brain stimulation (NIBS) techniques. These methods allow non-invasive and safe modulation of neural processes in the healthy brain, enabling researchers to directly study how experimentally altered neural activity causally affects behavior. This unique property of NIBS methods has, on the one hand, led to groundbreaking findings on the brain basis of various aspects of behavior and has raised interest in possible clinical and practical applications of these methods. On the other hand, it has also triggered increasingly critical debates about the properties and possible limitations of these methods. In this review, we discuss these issues, clarify the challenges associated with the use of currently available NIBS techniques for basic research and practical applications, and provide recommendations for studies using NIBS techniques to establish brain–behavior relationships.

544 citations


Posted Content
TL;DR: In this article, a generalization of GNNs, called $k$-dimensional GNNs ($k$-GNNs), was proposed, which can take higher-order graph structures at multiple scales into account.
Abstract: In recent years, graph neural networks (GNNs) have emerged as a powerful neural architecture to learn vector representations of nodes and graphs in a supervised, end-to-end fashion. Up to now, GNNs have only been evaluated empirically---showing promising results. The following work investigates GNNs from a theoretical point of view and relates them to the $1$-dimensional Weisfeiler-Leman graph isomorphism heuristic ($1$-WL). We show that GNNs have the same expressiveness as the $1$-WL in terms of distinguishing non-isomorphic (sub-)graphs. Hence, both algorithms also have the same shortcomings. Based on this, we propose a generalization of GNNs, so-called $k$-dimensional GNNs ($k$-GNNs), which can take higher-order graph structures at multiple scales into account. These higher-order structures play an essential role in the characterization of social networks and molecule graphs. Our experimental evaluation confirms our theoretical findings as well as confirms that higher-order information is useful in the task of graph classification and regression.
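The $1$-WL heuristic that the paper relates to GNNs can be sketched compactly. The following toy implementation (all names illustrative, not from the paper) shows the colour-refinement loop whose neighbourhood aggregation mirrors what a GNN message-passing layer computes:

```python
from collections import Counter

def wl_refinement(adj, num_rounds=3):
    """One-dimensional Weisfeiler-Leman colour refinement.

    adj: dict mapping each node to a list of neighbours.
    Returns the histogram of final node colours; two graphs with
    different histograms are certainly non-isomorphic."""
    colours = {v: 0 for v in adj}  # uniform initial colouring
    for _ in range(num_rounds):
        # New colour = own colour plus the sorted multiset of neighbour
        # colours -- the aggregation a GNN layer also performs.
        signatures = {
            v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
            for v in adj
        }
        # Compress signatures to small integer colour ids.
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colours = {v: palette[signatures[v]] for v in adj}
    return Counter(colours.values())

# A triangle and a path on three nodes get different colour histograms:
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
print(wl_refinement(triangle) != wl_refinement(path))  # True
```

Graph pairs that $1$-WL cannot distinguish (e.g. certain regular graphs) are exactly the pairs a standard GNN cannot distinguish either, which is what motivates the higher-order $k$-GNNs.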

489 citations


Journal ArticleDOI
TL;DR: This review overviews the present experimental status of dark matter, including current bounds and recent claims and hints of a possible signal in a wide range of experiments: direct detection in underground laboratories, gamma-ray, cosmic-ray, and x-ray observations, neutrino telescopes, and the LHC.
Abstract: We review several current aspects of dark matter theory and experiment. We overview the present experimental status, which includes current bounds and recent claims and hints of a possible signal in a wide range of experiments: direct detection in underground laboratories, gamma-ray, cosmic-ray, and x-ray observations, neutrino telescopes, and the LHC. We briefly review several possible particle candidates for weakly interacting massive particle (WIMP) dark matter that have recently been considered in the literature. We pay particular attention to the lightest neutralino of supersymmetry, as it remains the best motivated candidate for dark matter and also shows excellent detection prospects. Finally, we briefly review some alternative scenarios that can considerably alter the properties and detection prospects of dark matter obtained within the standard thermal WIMP paradigm.

454 citations


Proceedings ArticleDOI
01 Jan 2018
TL;DR: This work presents Spline-based Convolutional Neural Networks (SplineCNNs), a variant of deep neural networks for irregularly structured and geometric input, e.g., graphs or meshes, that is a generalization of the traditional CNN convolution operator by using continuous kernel functions parametrized by a fixed number of trainable weights.
Abstract: We present Spline-based Convolutional Neural Networks (SplineCNNs), a variant of deep neural networks for irregularly structured and geometric input, e.g., graphs or meshes. Our main contribution is a novel convolution operator based on B-splines that makes the computation time independent of the kernel size due to the local support property of the B-spline basis functions. As a result, we obtain a generalization of the traditional CNN convolution operator by using continuous kernel functions parametrized by a fixed number of trainable weights. In contrast to related approaches that filter in the spectral domain, the proposed method aggregates features purely in the spatial domain. In addition, SplineCNN allows complete end-to-end training of deep architectures, using only the geometric structure as input, instead of handcrafted feature descriptors. For validation, we apply our method to tasks from the fields of image graph classification, shape correspondence and graph node classification, and show that it outperforms or is on par with state-of-the-art approaches while being significantly faster and having favorable properties like domain independence. Our source code is available on GitHub.
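The local-support property can be illustrated with a minimal one-dimensional sketch, assuming a linear (degree-1) open, uniform B-spline basis with scalar features; this is an illustration of the idea, not the paper's implementation:

```python
def linear_bspline_weights(u, num_knots):
    """Evaluate an open, uniform degree-1 B-spline basis at a pseudo-
    coordinate u in [0, 1]. Only degree + 1 = 2 basis functions are
    non-zero at any u (local support), so the per-edge cost does not
    grow with the number of kernel control weights."""
    pos = u * (num_knots - 1)
    left = min(int(pos), num_knots - 2)   # index of the left active knot
    frac = pos - left
    return [(left, 1.0 - frac), (left + 1, frac)]

def spline_conv_1d(neighbor_feats, pseudo_coords, weights):
    """Continuous-kernel aggregation over one node's neighbours: each
    neighbour feature is weighted by the kernel evaluated at its
    pseudo-coordinate, with the kernel parametrized by a fixed number
    of trainable control weights."""
    out = 0.0
    for xj, u in zip(neighbor_feats, pseudo_coords):
        for idx, b in linear_bspline_weights(u, len(weights)):
            out += b * weights[idx] * xj
    return out

# With all control weights equal to 1 the kernel is constant, so the
# operator reduces to a plain sum of the neighbour features:
print(spline_conv_1d([1.0, 2.0, 3.0], [0.0, 0.5, 1.0], [1.0] * 5))  # 6.0
```

Because the basis functions at each pseudo-coordinate always partition unity over at most two control weights here, evaluating the kernel costs the same whether the kernel has 5 or 500 trainable weights.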

388 citations


Journal ArticleDOI
Morad Aaboud, Georges Aad, Brad Abbott, Ovsat Abdinov, +2954 more (225 institutions)
TL;DR: In this paper, a search for new phenomena in final states with an energetic jet and large missing transverse momentum is reported, and the results are translated into exclusion limits in models with pair-produced weakly interacting dark-matter candidates, large extra spatial dimensions, and supersymmetric particles in several compressed scenarios.
Abstract: Results of a search for new phenomena in final states with an energetic jet and large missing transverse momentum are reported. The search uses proton-proton collision data corresponding to an integrated luminosity of 36.1 fb−1 at a centre-of-mass energy of 13 TeV collected in 2015 and 2016 with the ATLAS detector at the Large Hadron Collider. Events are required to have at least one jet with a transverse momentum above 250 GeV and no leptons (e or μ). Several signal regions are considered with increasing requirements on the missing transverse momentum above 250 GeV. Good agreement is observed between the number of events in data and Standard Model predictions. The results are translated into exclusion limits in models with pair-produced weakly interacting dark-matter candidates, large extra spatial dimensions, and supersymmetric particles in several compressed scenarios.

358 citations


Journal ArticleDOI
Brad Abbott, Allan G Clark, S. Latorre, O. Crespo-Lopez, +397 more (51 institutions)
TL;DR: The motivation for this new pixel layer, the Insertable B-Layer (IBL), was to maintain or improve the robustness and performance of the ATLAS tracking system, given the higher instantaneous and integrated luminosities realised following the shutdown.
Abstract: During the shutdown of the CERN Large Hadron Collider in 2013-2014, an additional pixel layer was installed between the existing Pixel detector of the ATLAS experiment and a new, smaller radius beam pipe. The motivation for this new pixel layer, the Insertable B-Layer (IBL), was to maintain or improve the robustness and performance of the ATLAS tracking system, given the higher instantaneous and integrated luminosities realised following the shutdown. Because of the extreme radiation and collision rate environment, several new radiation-tolerant sensor and electronic technologies were utilised for this layer. This paper reports on the IBL construction and integration prior to its operation in the ATLAS detector.

325 citations


Journal ArticleDOI
Roel Aaij, Gregory Ciezarek, P. Collins, Stefan Roiser, +820 more (51 institutions)
TL;DR: In this paper, the τ-lepton decays with three charged pions in the final state were measured using a data sample of proton-proton collisions collected with the LHCb detector at center-of-mass energies of 7 and 8 TeV.
Abstract: The ratio of branching fractions R(D^{*-})≡B(B^{0}→D^{*-}τ^{+}ν_{τ})/B(B^{0}→D^{*-}μ^{+}ν_{μ}) is measured using a data sample of proton-proton collisions collected with the LHCb detector at center-of-mass energies of 7 and 8 TeV, corresponding to an integrated luminosity of 3 fb^{-1}. For the first time, R(D^{*-}) is determined using the τ-lepton decays with three charged pions in the final state. The B^{0}→D^{*-}τ^{+}ν_{τ} yield is normalized to that of the B^{0}→D^{*-}π^{+}π^{-}π^{+} mode, providing a measurement of B(B^{0}→D^{*-}τ^{+}ν_{τ})/B(B^{0}→D^{*-}π^{+}π^{-}π^{+})=1.97±0.13±0.18, where the first uncertainty is statistical and the second systematic. The value of B(B^{0}→D^{*-}τ^{+}ν_{τ})=(1.42±0.094±0.129±0.054)% is obtained, where the third uncertainty is due to the limited knowledge of the branching fraction of the normalization mode. Using the well-measured branching fraction of the B^{0}→D^{*-}μ^{+}ν_{μ} decay, a value of R(D^{*-})=0.291±0.019±0.026±0.013 is established, where the third uncertainty is due to the limited knowledge of the branching fractions of the normalization and B^{0}→D^{*-}μ^{+}ν_{μ} modes. This measurement is in agreement with the standard model prediction and with previous results.
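The quoted R(D^{*-}) follows from dividing the measured B(B^{0}→D^{*-}τ^{+}ν_{τ}) by the external branching fraction of the muonic mode. A quick check of the central values, assuming a normalisation branching fraction of 4.88% (an assumed external input, not stated in the abstract; uncertainties are not propagated):

```python
# Quick consistency check of the quoted central values.
b_tau = 1.42e-2   # measured B(B0 -> D*- tau+ nu_tau), from the abstract
b_mu = 4.88e-2    # assumed external B(B0 -> D*- mu+ nu_mu)

r_dstar = b_tau / b_mu
print(round(r_dstar, 3))  # 0.291
```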

242 citations


Journal ArticleDOI
TL;DR: This work reviews the state-of-the-art of designing and evaluating news recommender systems over the last ten years and analyzes which particular challenges of news recommendation have been well explored and which areas still require more work.
Abstract: More and more people read the news online, e.g., by visiting the websites of their favorite newspapers or by navigating the sites of news aggregators. However, the abundance of news information that is published online every day through different channels can make it challenging for readers to locate the content they are interested in. The goal of News Recommender Systems (NRS) is to make reading suggestions to users in a personalized way. Due to their practical relevance, a variety of technical approaches to build such systems have been proposed over the last two decades. In this work, we review the state-of-the-art of designing and evaluating news recommender systems over the last ten years. One main goal of the work is to analyze which particular challenges of news recommendation (e.g., short item lifetimes and recency aspects) have been well explored and which areas still require more work. Furthermore, in contrast to previous surveys, the paper specifically discusses methodological questions and today’s academic practice of evaluating and comparing different algorithmic news recommendation approaches based on accuracy measures.

225 citations


Journal ArticleDOI
Roel Aaij, Bernardo Adeva, Marco Adinolfi, Ziad Ajaltouni, +805 more (52 institutions)
TL;DR: The search for long-lived dark photons is the first to achieve sensitivity using a displaced-vertex signature and the constraints placed on promptlike dark photons are the most stringent to date for the mass range 10.6
Abstract: Searches are performed for both promptlike and long-lived dark photons, A^{'}, produced in proton-proton collisions at a center-of-mass energy of 13 TeV, using A^{'}→μ^{+}μ^{-} decays and a data sample corresponding to an integrated luminosity of 1.6 fb^{-1} collected with the LHCb detector. The promptlike A^{'} search covers the mass range from near the dimuon threshold up to 70 GeV, while the long-lived A^{'} search is restricted to the low-mass region 214

214 citations


Journal ArticleDOI
Morad Aaboud, Georges Aad, Brad Abbott, Ovsat Abdinov, +2878 more (197 institutions)
TL;DR: The performance of the missing transverse momentum reconstruction with the ATLAS detector is evaluated using data collected in proton–proton collisions at the LHC at a centre-of-mass energy of 13 TeV in 2015.
Abstract: The performance of the missing transverse momentum (ETmiss) reconstruction with the ATLAS detector is evaluated using data collected in proton-proton collisions at the LHC at a centre-of-mass energy of 13 TeV in 2015. To reconstruct ETmiss, fully calibrated electrons, muons, photons, hadronically decaying τ-leptons, and jets reconstructed from calorimeter energy deposits and charged-particle tracks are used. These are combined with the soft hadronic activity measured by reconstructed charged-particle tracks not associated with the hard objects. Possible double counting of contributions from reconstructed charged-particle tracks from the inner detector, energy deposits in the calorimeter, and reconstructed muons from the muon spectrometer is avoided by applying a signal ambiguity resolution procedure which rejects already used signals when combining the various ETmiss contributions. The individual terms as well as the overall reconstructed ETmiss are evaluated with various performance metrics for scale (linearity), resolution, and sensitivity to the data-taking conditions. The method developed to determine the systematic uncertainties of the ETmiss scale and resolution is discussed. Results are shown based on the full 2015 data sample corresponding to an integrated luminosity of 3.2 fb−1.

208 citations


Journal ArticleDOI
TL;DR: If limitations can be overcome in the future, chemists will be able to design multifunctional systems of similar activity and complexity as nature’s enzymes from simple and easily accessible synthetic building blocks.
Abstract: Conspectus: Porous nanostructures and materials based on metal-mediated self-assembly have developed into a vibrantly studied subdiscipline of supramolecular chemistry during the past decades. In principle, two branches of such coordination compounds can be distinguished: Metal–organic frameworks (MOFs) on the one side represent infinite porous networks of metals or metal clusters that are connected via organic ligands to give solid-state materials. On the other hand, metal–organic cages (MOCs) are discrete and soluble systems with only a limited number of pores. Formation of a particular structure type is achieved by carefully balancing the donor site angles within the ligands as well as the nature and coordination geometry of the metal component. Years of research on MOFs and MOCs has yielded numerous types of well-defined porous crystals and complex supramolecular architectures. Since various synthetic routes and postsynthetic modification methods have been established, the focus of recent developments h...

Journal ArticleDOI
Morad Aaboud, Georges Aad, Brad Abbott, Ovsat Abdinov, +2981 more (220 institutions)
TL;DR: In this article, a search was performed for resonant and non-resonant Higgs boson pair production in the $ \upgamma \upgamma b\overline{b} $ final state.
Abstract: A search is performed for resonant and non-resonant Higgs boson pair production in the $ \upgamma \upgamma b\overline{b} $ final state. The data set used corresponds to an integrated luminosity of 36.1 fb$^{−1}$ of proton-proton collisions at a centre-of-mass energy of 13 TeV recorded by the ATLAS detector at the CERN Large Hadron Collider. No significant excess relative to the Standard Model expectation is observed. The observed limit on the non-resonant Higgs boson pair cross-section is 0.73 pb at 95% confidence level. This observed limit is equivalent to 22 times the predicted Standard Model cross-section. The Higgs boson self-coupling (κ$_{λ}$ = λ$_{HHH}$/λ$_{HHH}^{SM}$ ) is constrained at 95% confidence level to −8.2 < κ$_{λ}$ < 13.2. For resonant Higgs boson pair production through $ X\to HH\to \upgamma \upgamma b\overline{b} $ , the limit is presented, using the narrow-width approximation, as a function of m$_{X}$ in the range 260 GeV < m$_{X}$ < 1000 GeV. The observed limits range from 1.1 pb to 0.12 pb over this mass range.
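The quoted numbers can be cross-checked against each other: dividing the observed non-resonant limit by the stated factor of 22 recovers the implied Standard Model prediction for the HH cross-section:

```python
# Consistency check: the observed limit of 0.73 pb is stated to equal
# 22 times the Standard Model prediction, so the implied SM
# cross-section is 0.73 pb / 22, converted here to femtobarns.
observed_limit_pb = 0.73
ratio_to_sm = 22
sm_prediction_fb = observed_limit_pb / ratio_to_sm * 1000
print(round(sm_prediction_fb))  # 33
```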

Journal ArticleDOI
TL;DR: In this paper, the characterization of eutectic solvents composed of the terpenes thymol or l(−)-menthol and monocarboxylic acids is studied, aiming at the design of these solvents.
Abstract: Recently, some works claim that hydrophobic deep eutectic solvents could be prepared based on menthol and monocarboxylic acids. Despite some promising potential applications, these systems were poorly understood, and this work addresses this issue. Here, the characterization of eutectic solvents composed of the terpenes thymol or l(−)-menthol and monocarboxylic acids is studied, aiming at the design of these solvents. Their solid–liquid phase diagrams were measured by differential scanning calorimetry in the whole composition range, showing that a broader composition range, and not only fixed stoichiometric proportions, can be used as solvents at low temperatures. Additionally, solvent densities and viscosities close to the eutectic compositions were measured, showing low viscosity and lower density than water. The solvatochromic parameters at the eutectic composition were also investigated, aiming at better understanding their polarity. The high acidity is mainly provided by the presence of thymol in the m...

Journal ArticleDOI
Morad Aaboud, Georges Aad, Brad Abbott, Jalal Abdallah, +2829 more (197 institutions)
TL;DR: In this paper, the mass of the $W$ boson was measured based on proton-proton collision data recorded in 2011 at a centre-of-mass energy of 7 TeV with the ATLAS detector at the LHC.
Abstract: A measurement of the mass of the $W$ boson is presented based on proton-proton collision data recorded in 2011 at a centre-of-mass energy of 7 TeV with the ATLAS detector at the LHC, and corresponding to 4.6 fb$^{-1}$ of integrated luminosity. The selected data sample consists of $7.8 \times 10^6$ candidates in the $W\rightarrow \mu\nu$ channel and $5.9 \times 10^6$ candidates in the $W\rightarrow e\nu$ channel. The $W$-boson mass is obtained from template fits to the reconstructed distributions of the charged lepton transverse momentum and of the $W$ boson transverse mass in the electron and muon decay channels, yielding \begin{eqnarray} m_W &=& 80370 \pm 7 \, (\textrm{stat.}) \pm 11 \, (\textrm{exp. syst.}) \pm 14 \, (\textrm{mod. syst.}) \, \textrm{MeV} \\ &=& 80370 \pm 19 \, \textrm{MeV}, \end{eqnarray} where the first uncertainty is statistical, the second corresponds to the experimental systematic uncertainty, and the third to the physics-modelling systematic uncertainty. A measurement of the mass difference between the $W^+$ and $W^-$ bosons yields $m_{W^+}-m_{W^-} = -29 \pm 28$ MeV.
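The combined 19 MeV uncertainty quoted above is simply the quadrature sum of the three independent components, which is easy to verify:

```python
import math

# The three uncertainty components quoted for m_W, in MeV.
stat, exp_syst, mod_syst = 7.0, 11.0, 14.0

# Independent uncertainties combine in quadrature.
total = math.sqrt(stat**2 + exp_syst**2 + mod_syst**2)
print(round(total))  # 19
```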

Journal ArticleDOI
Roel Aaij, Bernardo Adeva, Marco Adinolfi, Ziad Ajaltouni, +806 more (52 institutions)
TL;DR: In this paper, a measurement of the ratio of branching fractions was reported for the LHCb collision data at center-of-mass energies of 7 and 8 TeV, with a significance of 3 standard deviations corrected for systematic uncertainty.
Abstract: A measurement is reported of the ratio of branching fractions R(J/ψ)=B(B_{c}^{+}→J/ψτ^{+}ν_{τ})/B(B_{c}^{+}→J/ψμ^{+}ν_{μ}), where the τ^{+} lepton is identified in the decay mode τ^{+}→μ^{+}ν_{μ}ν[over ¯]_{τ}. This analysis uses a sample of proton-proton collision data corresponding to 3.0 fb^{-1} of integrated luminosity recorded with the LHCb experiment at center-of-mass energies of 7 and 8 TeV. A signal is found for the decay B_{c}^{+}→J/ψτ^{+}ν_{τ} at a significance of 3 standard deviations corrected for systematic uncertainty, and the ratio of the branching fractions is measured to be R(J/ψ)=0.71±0.17(stat)±0.18(syst). This result lies within 2 standard deviations above the range of central values currently predicted by the standard model.

Journal ArticleDOI
07 Dec 2018, Science
TL;DR: This work sequenced more than 400 pretreatment neuroblastomas and identified molecular features that characterize the three distinct clinical outcomes, and proposed a mechanistic classification of neuroblastoma that may benefit the clinical management of patients.
Abstract: Neuroblastoma is a pediatric tumor of the sympathetic nervous system. Its clinical course ranges from spontaneous tumor regression to fatal progression. To investigate the molecular features of the divergent tumor subtypes, we performed genome sequencing on 416 pretreatment neuroblastomas and assessed telomere maintenance mechanisms in 208 of these tumors. We found that patients whose tumors lacked telomere maintenance mechanisms had an excellent prognosis, whereas the prognosis of patients whose tumors harbored telomere maintenance mechanisms was substantially worse. Survival rates were lowest for neuroblastoma patients whose tumors harbored telomere maintenance mechanisms in combination with RAS and/or p53 pathway mutations. Spontaneous tumor regression occurred both in the presence and absence of these mutations in patients with telomere maintenance-negative tumors. On the basis of these data, we propose a mechanistic classification of neuroblastoma that may benefit the clinical management of patients.

Journal ArticleDOI
TL;DR: It is demonstrated that electron crystallography complements X‐ray crystallography and is the technique of choice for all unsolved cases in which submicrometer‐sized crystals were the limiting factor.
Abstract: Chemists of all fields currently publish about 50 000 crystal structures per year, the vast majority of which are X-ray structures. We determined two molecular structures by employing electron rather than X-ray diffraction. For this purpose, an EIGER hybrid pixel detector was fitted to a transmission electron microscope, yielding an electron diffractometer. The structure of a new methylene blue derivative was determined at 0.9 Å resolution from a crystal smaller than 1×2 μm². Several thousand active pharmaceutical ingredients (APIs) are only available as submicrocrystalline powders. To illustrate the potential of electron crystallography for the pharmaceutical industry, we also determined the structure of an API from its pill. We demonstrate that electron crystallography complements X-ray crystallography and is the technique of choice for all unsolved cases in which submicrometer-sized crystals were the limiting factor.

Journal ArticleDOI
Stefano Ansoldi, Louis Antonelli, C. Arcaro, +150 more (21 institutions)
TL;DR: In this article, a neutrino with energy of ~290 TeV was detected in coincidence with the BL Lac object TXS 0506+056 during enhanced gamma-ray activity, with chance coincidence rejected at the ~3σ level.
Abstract: A neutrino with energy of ~290 TeV, IceCube-170922A, was detected in coincidence with the BL Lac object TXS 0506+056 during enhanced gamma-ray activity, with chance coincidence being rejected at the ~3σ level. We monitored the object in the very-high-energy (VHE) band with the Major Atmospheric Gamma-ray Imaging Cherenkov (MAGIC) telescopes for ~41 hr from 1.3 to 40.4 days after the neutrino detection. Day-timescale variability is clearly resolved. We interpret the quasi-simultaneous neutrino and broadband electromagnetic observations with a novel one-zone lepto-hadronic model, based on interactions of electrons and protons co-accelerated in the jet with external photons originating from a slow-moving plasma sheath surrounding the faster jet spine. We can reproduce the multiwavelength spectra of TXS 0506+056 with neutrino rate and energy compatible with IceCube-170922A, and with plausible values for the jet power of ~10^45–4×10^46 erg s^−1. The steep spectrum observed by MAGIC is concordant with internal γγ absorption above ~100 GeV entailed by photohadronic production of a ~290 TeV neutrino, corroborating a genuine connection between the multi-messenger signals. In contrast to previous predictions of predominantly hadronic emission from neutrino sources, the gamma-rays can be mostly ascribed to inverse Compton upscattering of external photons by accelerated electrons. The X-ray and VHE bands provide crucial constraints on the emission from both accelerated electrons and protons. We infer that the maximum energy of protons in the jet comoving frame can be in the range ~10^14–10^18 eV.

Journal ArticleDOI
Morad Aaboud, Alexander Kupco, Peter Davison, Samuel Webb, +2897 more (195 institutions)
TL;DR: A search for the electroweak production of charginos, neutralinos and sleptons decaying into final states involving two or three electrons or muons is presented and stringent limits at 95% confidence level are placed on the masses of relevant supersymmetric particles.
Abstract: A search for the electroweak production of charginos, neutralinos and sleptons decaying into final states involving two or three electrons or muons is presented. The analysis is based on 36.1 fb$^{-1}$ of $\sqrt{s}=13$ TeV proton–proton collisions recorded by the ATLAS detector at the Large Hadron Collider. Several scenarios based on simplified models are considered. These include the associated production of the next-to-lightest neutralino and the lightest chargino, followed by their decays into final states with leptons and the lightest neutralino via either sleptons or Standard Model gauge bosons, direct production of chargino pairs, which in turn decay into leptons and the lightest neutralino via intermediate sleptons, and slepton pair production, where each slepton decays directly into the lightest neutralino and a lepton. No significant deviations from the Standard Model expectation are observed and stringent limits at 95% confidence level are placed on the masses of relevant supersymmetric particles in each of these scenarios. For a massless lightest neutralino, masses up to 580 GeV are excluded for the associated production of the next-to-lightest neutralino and the lightest chargino, assuming gauge-boson mediated decays, whereas for slepton-pair production masses up to 500 GeV are excluded assuming three generations of mass-degenerate sleptons.

Journal ArticleDOI
Morad Aaboud, Georges Aad, Brad Abbott, Ovsat Abdinov, +2935 more (198 institutions)
TL;DR: Combined 95% confidence-level upper limits are set on the production cross section for a range of vectorlike quark scenarios, significantly improving upon the reach of the individual searches.
Abstract: A combination of the searches for pair-produced vectorlike partners of the top and bottom quarks in various decay channels (T -> Zt/Wb/Ht, B -> Zb/Wt/Hb) is performed using 36.1 fb−1 of pp ...

Journal ArticleDOI
TL;DR: The phenomenon of shrinking cities was widely discussed across Europe at the beginning of the 21st century, as most European countries saw an increasingly ageing population and an internal migration trend.
Abstract: At the beginning of the 21st century, the phenomenon of shrinking cities was widely discussed across Europe. Most European countries saw an increasingly ageing population and an internal migration ...

Journal ArticleDOI
TL;DR: An in-depth performance comparison of a number of session-based recommendation algorithms based on recurrent neural networks, factorized Markov model approaches, as well as simpler methods based, e.g., on nearest neighbor schemes reveals that algorithms of this latter class often perform equally well or significantly better than today’s more complex approaches based on deep neural networks.
Abstract: Recommender systems help users find relevant items of interest, for example on e-commerce or media streaming sites. Most academic research is concerned with approaches that personalize the recommendations according to long-term user profiles. In many real-world applications, however, such long-term profiles often do not exist and recommendations therefore have to be made solely based on the observed behavior of a user during an ongoing session. Given the high practical relevance of the problem, an increased interest in this problem can be observed in recent years, leading to a number of proposals for session-based recommendation algorithms that typically aim to predict the user’s immediate next actions. In this work, we present the results of an in-depth performance comparison of a number of such algorithms, using a variety of datasets and evaluation measures. Our comparison includes the most recent approaches based on recurrent neural networks like gru4rec, factorized Markov model approaches such as fism or fossil, as well as simpler methods based, e.g., on nearest neighbor schemes. Our experiments reveal that algorithms of this latter class, despite their sometimes almost trivial nature, often perform equally well or significantly better than today’s more complex approaches based on deep neural networks. Our results therefore suggest that there is substantial room for improvement regarding the development of more sophisticated session-based recommendation algorithms.
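The nearest-neighbour family highlighted above can be sketched in a few lines; the following is an illustrative minimal variant (not any specific published algorithm) that scores candidate items by the Jaccard similarity of the current session to past sessions:

```python
from collections import defaultdict

def session_knn_recommend(current, past_sessions, k=2, top_n=3):
    """Minimal session-kNN recommender: find the k past sessions most
    similar to the current session (Jaccard over item sets) and score
    their items by the similarity of the sessions they appear in."""
    cur = set(current)
    scored_sessions = []
    for s in past_sessions:
        s_set = set(s)
        jaccard = len(cur & s_set) / len(cur | s_set)
        scored_sessions.append((jaccard, s_set))
    scored_sessions.sort(key=lambda t: t[0], reverse=True)

    scores = defaultdict(float)
    for sim, s_set in scored_sessions[:k]:
        if sim == 0.0:
            continue                      # ignore unrelated sessions
        for item in s_set - cur:          # only recommend unseen items
            scores[item] += sim
    ranked = sorted(scores.items(), key=lambda t: (-t[1], t[0]))
    return [item for item, _ in ranked][:top_n]

past = [["a", "b", "c"], ["a", "b", "d"], ["x", "y"]]
print(session_knn_recommend(["a", "b"], past))  # ['c', 'd']
```

Despite its near-trivial logic, this kind of baseline is what the comparison finds to be competitive with recurrent-network approaches such as gru4rec.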

Proceedings ArticleDOI
02 Jul 2018
TL;DR: In this paper, a GAN is used to generate levels for Super Mario Bros using a level from the Video Game Level Corpus, which is further improved by application of the covariance matrix adaptation evolution strategy (CMA-ES).
Abstract: Generative Adversarial Networks (GANs) are a machine learning approach capable of generating novel example outputs across a space of provided training examples. Procedural Content Generation (PCG) of levels for video games could benefit from such models, especially for games where there is a pre-existing corpus of levels to emulate. This paper trains a GAN to generate levels for Super Mario Bros using a level from the Video Game Level Corpus. The approach successfully generates a variety of levels similar to one in the original corpus, but is further improved by application of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Specifically, various fitness functions are used to discover levels within the latent space of the GAN that maximize desired properties. Simple static properties are optimized, such as a given distribution of tile types. Additionally, the champion A* agent from the 2009 Mario AI competition is used to assess whether a level is playable, and how many jumping actions are required to beat it. These fitness functions allow for the discovery of levels that exist within the space of examples designed by experts, and also guide the search towards levels that fulfill one or more specified objectives.
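The latent-space search described above can be sketched with a toy generator and a simple (1+1) evolution strategy standing in for the trained GAN and CMA-ES; all names, the objective, and the generator are illustrative assumptions, not the paper's code:

```python
import math
import random

def toy_generator(z):
    """Stand-in for a trained GAN generator: maps a latent vector to a
    'level', summarised here by its fraction of ground tiles in [0, 1]."""
    return 1.0 / (1.0 + math.exp(-sum(z)))

def fitness(z, target_ground=0.3):
    """A static tile-distribution objective in the spirit of the paper:
    negative distance to a desired fraction of ground tiles."""
    return -abs(toy_generator(z) - target_ground)

def evolve_latent(dim=8, iters=500, sigma=0.5, seed=0):
    """(1+1) evolution strategy with annealed step size, standing in for
    CMA-ES: mutate the latent vector and keep the child on improvement."""
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    best = fitness(z)
    for _ in range(iters):
        child = [zi + rng.gauss(0.0, sigma) for zi in z]
        f = fitness(child)
        if f > best:
            z, best = child, f
        sigma *= 0.99  # gradually shrink the mutation step
    return z

z_opt = evolve_latent()
print(round(toy_generator(z_opt), 2))
```

In the paper the fitness would instead decode the latent vector through the GAN and evaluate the generated level (tile distribution, or playability via the A* agent), with CMA-ES adapting the full mutation covariance rather than a single step size.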

Journal ArticleDOI
Morad Aaboud, Georges Aad, Brad Abbott, Ovsat Abdinov, +2884 more (197 institutions)
TL;DR: A search for doubly charged Higgs bosons with pairs of prompt, isolated, highly energetic leptons with the same electric charge is presented, fitting the dilepton mass spectra in several exclusive signal regions.
Abstract: A search for doubly charged Higgs bosons with pairs of prompt, isolated, highly energetic leptons with the same electric charge is presented. The search uses a proton–proton collision data sample at a centre-of-mass energy of 13 TeV corresponding to 36.1 $$\text {fb}^{-1}$$ of integrated luminosity recorded in 2015 and 2016 by the ATLAS detector at the LHC. This analysis focuses on the decays $$H^{\pm \pm }\rightarrow e^{\pm }e^{\pm }$$ , $$H^{\pm \pm }\rightarrow e^{\pm }\mu ^{\pm }$$ and $$H^{\pm \pm }\rightarrow \mu ^{\pm }\mu ^{\pm }$$ , fitting the dilepton mass spectra in several exclusive signal regions. No significant evidence of a signal is observed and corresponding limits on the production cross-section and consequently a lower limit on $$m(H^{\pm \pm })$$ are derived at 95% confidence level. With $$\ell ^{\pm }\ell ^{\pm }=e^{\pm }e^{\pm }/\mu ^{\pm }\mu ^{\pm }/e^{\pm }\mu ^{\pm }$$ , the observed lower limit on the mass of a doubly charged Higgs boson only coupling to left-handed leptons varies from 770 to 870 GeV (850 GeV expected) for $$B(H^{\pm \pm }\rightarrow \ell ^{\pm }\ell ^{\pm })=100\%$$ and both the expected and observed mass limits are above 450 GeV for $$B(H^{\pm \pm }\rightarrow \ell ^{\pm }\ell ^{\pm })=10\%$$ and any combination of partial branching ratios.

Proceedings ArticleDOI
10 Jul 2018
TL;DR: This work demonstrates how this task can be performed and provides results on a large data set of manually labeled radar reflections, and eliminates the need for clustering algorithms and manually selected features.
Abstract: Semantic segmentation on radar point clouds is a new challenging task in radar data processing. We demonstrate how this task can be performed and provide results on a large data set of manually labeled radar reflections. In contrast to previous approaches where generated feature vectors from clustered reflections were used as an input for a classifier, now the whole radar point cloud is used as an input and class probabilities are obtained for every single reflection. We thereby eliminate the need for clustering algorithms and manually selected features.
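The interface change the abstract describes (whole point cloud in, per-reflection class probabilities out, no clustering step) can be illustrated with a toy scorer. The class names, feature weights, and linear scoring below are made-up stand-ins for the paper's learned network; only the input/output shape reflects the described approach.

```python
import math

CLASSES = ["car", "pedestrian", "clutter"]  # illustrative labels

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def segment(point_cloud):
    """point_cloud: list of (range_m, doppler_mps, rcs_dbsm) reflections.
    Returns a class-probability distribution for every single reflection,
    with no clustering or hand-crafted cluster features in between."""
    out = []
    for range_m, doppler, rcs in point_cloud:
        scores = [
            0.5 * rcs + 0.2 * abs(doppler),   # 'car': strong, moving returns
            0.1 * rcs + 0.8 * abs(doppler),   # 'pedestrian': weak but moving
            -0.3 * rcs - 0.1 * abs(doppler),  # 'clutter': weak, static
        ]
        out.append(softmax(scores))
    return out

probs = segment([(12.0, 5.0, 8.0), (30.0, 0.0, -2.0)])
```

Contrast this with the previous pipeline, which would first cluster reflections and then classify one feature vector per cluster, losing per-reflection granularity.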

Journal ArticleDOI
Morad Aaboud, Georges Aad, Brad Abbott, Ovsat Abdinov, +2,983 more (218 institutions)
TL;DR: In this paper, upper limits at the 95% confidence level were set on the production cross-section of the charged Higgs boson times the branching fraction, in the range 4.2–0.0025 pb for the mass range 90–2000 GeV; for 90–160 GeV this corresponds to branching-fraction limits between 0.25% and 0.031%.
Abstract: Charged Higgs bosons produced either in top-quark decays or in association with a top-quark, subsequently decaying via H$^{±}$ → τ$^{±}$ν$_{τ}$, are searched for in 36.1 fb$^{−1}$ of proton-proton collision data at $ \sqrt{s}=13 $ TeV recorded with the ATLAS detector. Depending on whether the top-quark produced together with H$^{±}$ decays hadronically or leptonically, the search targets τ+jets and τ+lepton final states, in both cases with a hadronically decaying τ-lepton. No evidence of a charged Higgs boson is found. For the mass range of $ {m}_{H^{\pm }} $ = 90–2000 GeV, upper limits at the 95% confidence level are set on the production cross-section of the charged Higgs boson times the branching fraction $ \mathrm{\mathcal{B}}\left({H}^{\pm}\to {\tau}^{\pm }{\nu}_{\tau}\right) $ in the range 4.2–0.0025 pb. In the mass range 90–160 GeV, assuming the Standard Model cross-section for $ t\overline{t} $ production, this corresponds to upper limits between 0.25% and 0.031% for the branching fraction $ \mathrm{\mathcal{B}}\left(t\to b{H}^{\pm}\right)\times \mathrm{\mathcal{B}}\left({H}^{\pm}\to {\tau}^{\pm }{\nu}_{\tau}\right) $ .

Journal ArticleDOI
TL;DR: In this article, an adaptive droop control (ADC) strategy for the voltage source converter (VSC) based multiterminal high voltage direct current system, which enables the VSC-station to provide frequency regulation for onshore ac grids was proposed.
Abstract: This paper proposes an adaptive droop control (ADC) strategy for the voltage source converter (VSC) based multiterminal high voltage direct current (VSC-MTDC) system, which enables the VSC-station to provide frequency regulation for onshore ac grids. A V-I-f characteristic is derived to establish the relationship between the frequency deviation and the dc voltage. A VSC-station working with the V-I-f characteristic is able to adjust its dc voltage reference autonomously according to the grid frequency deviation. Thus, the power flow of the VSC-MTDC system is redistributed and more power is injected into the adjacent grid whose frequency deviation exceeds the threshold value. The ADC strategy inherits the advantage of traditional droop control (TDC) that no communication system is needed. An ac/dc hybrid system comprising the VSC-MTDC system, onshore grids, and offshore wind farms is built in DIgSILENT/PowerFactory. Modal analysis and nonlinear simulations are performed to obtain the optimal control parameters and demonstrate the performance of the ADC strategy.
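The core mechanism of the abstract, a dc-voltage reference adjusted autonomously from the local frequency deviation once a threshold is exceeded, can be sketched as a deadband droop relation. All constants, the gain, and the sign convention below are illustrative assumptions, not the paper's derived V-I-f characteristic.

```python
# Illustrative parameters (not from the paper).
F_NOM = 50.0      # nominal grid frequency (Hz)
V_NOM = 1.0       # nominal dc voltage reference (per unit)
DEADBAND = 0.05   # frequency-deviation threshold (Hz)
K_F = 0.02        # assumed frequency-to-voltage droop gain (pu/Hz)

def dc_voltage_ref(freq_hz):
    """Shift the dc voltage reference with the local frequency deviation,
    so that extra power is routed toward the ac grid whose frequency sags.
    Inside the deadband the station keeps its nominal reference (plain
    droop behaviour); outside it, the reference moves with the deviation
    beyond the threshold, redistributing power flow in the MTDC grid."""
    dev = freq_hz - F_NOM
    if abs(dev) <= DEADBAND:
        return V_NOM
    edge = DEADBAND if dev > 0 else -DEADBAND
    return V_NOM + K_F * (dev - edge)

print(dc_voltage_ref(50.0))   # nominal operation
print(dc_voltage_ref(49.5))   # under-frequency event: reference lowered
```

No measurement from any other terminal appears in the function, which is the point of the abstract's claim that ADC, like traditional droop control, needs no communication system.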

Journal ArticleDOI
Morad Aaboud, Alexander Kupco, Peter Davison, Samuel Webb, +2,937 more (223 institutions)
TL;DR: In this paper, the authors presented a search for direct electroweak gaugino or gluino pair production with a chargino nearly mass-degenerate with a stable neutralino.
Abstract: This paper presents a search for direct electroweak gaugino or gluino pair production with a chargino nearly mass-degenerate with a stable neutralino. It is based on an integrated luminosity of 36.1 fb$^{−1}$ of pp collisions at $ \sqrt{s}=13 $ TeV collected by the ATLAS experiment at the LHC. The final state of interest is a disappearing track accompanied by at least one jet with high transverse momentum from initial-state radiation or by four jets from the gluino decay chain. The use of short track segments reconstructed from the innermost tracking layers significantly improves the sensitivity to short chargino lifetimes. The results are found to be consistent with Standard Model predictions. Exclusion limits are set at 95% confidence level on the mass of charginos and gluinos for different chargino lifetimes. For a pure wino with a lifetime of about 0.2 ns, chargino masses up to 460 GeV are excluded. For the strong production channel, gluino masses up to 1.65 TeV are excluded assuming a chargino mass of 460 GeV and lifetime of 0.2 ns.

Posted Content
TL;DR: This paper trains a GAN to generate levels for Super Mario Bros using a level from the Video Game Level Corpus, and uses the champion A* agent from the 2009 Mario AI competition to assess whether a level is playable, and how many jumping actions are required to beat it.
Abstract: Generative Adversarial Networks (GANs) are a machine learning approach capable of generating novel example outputs across a space of provided training examples. Procedural Content Generation (PCG) of levels for video games could benefit from such models, especially for games where there is a pre-existing corpus of levels to emulate. This paper trains a GAN to generate levels for Super Mario Bros using a level from the Video Game Level Corpus. The approach successfully generates a variety of levels similar to one in the original corpus, but is further improved by application of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). Specifically, various fitness functions are used to discover levels within the latent space of the GAN that maximize desired properties. Simple static properties are optimized, such as a given distribution of tile types. Additionally, the champion A* agent from the 2009 Mario AI competition is used to assess whether a level is playable, and how many jumping actions are required to beat it. These fitness functions allow for the discovery of levels that exist within the space of examples designed by experts, and also guide the search towards levels that fulfill one or more specified objectives.

Journal ArticleDOI
M. G. Aartsen, Markus Ackermann, Jenni Adams, Juanan Aguilar, +317 more (46 institutions)
TL;DR: A measurement of the atmospheric neutrino oscillation parameters is presented using three years of data from the IceCube Neutrino Observatory, consistent with, and of similar precision to, those from accelerator- and reactor-based experiments.
Abstract: We present a measurement of the atmospheric neutrino oscillation parameters using three years of data from the IceCube Neutrino Observatory. The DeepCore infill array in the center of IceCube enabl ...