
Showing papers on "Elementary particle published in 2020"


Journal ArticleDOI
02 Dec 2020-Nature
TL;DR: The value of the fine-structure constant α differs by more than 5 standard deviations from the best available result from caesium recoil measurements, which modifies the constraints on possible candidate dark-matter particles proposed to explain the anomalous decays of excited states of 8Be nuclei and paves the way for testing the discrepancy observed in the magnetic moment anomaly of the muon in the electron sector.
Abstract: The standard model of particle physics is remarkably successful because it is consistent with (almost) all experimental results. However, it fails to explain dark matter, dark energy and the imbalance between matter and antimatter in the Universe. Because discrepancies between standard-model predictions and experimental observations may provide evidence of new physics, an accurate evaluation of these predictions requires highly precise values of the fundamental physical constants. Among them, the fine-structure constant α is of particular importance because it sets the strength of the electromagnetic interaction between light and charged elementary particles, such as the electron and the muon. Here we use matter-wave interferometry to measure the recoil velocity of a rubidium atom that absorbs a photon, and determine the fine-structure constant α−1 = 137.035999206(11) with a relative accuracy of 81 parts per trillion. The accuracy of eleven digits in α leads to an electron g factor1,2—the most precise prediction of the standard model—that has a greatly reduced uncertainty. Our value of the fine-structure constant differs by more than 5 standard deviations from the best available result from caesium recoil measurements3. Our result modifies the constraints on possible candidate dark-matter particles proposed to explain the anomalous decays of excited states of 8Be nuclei4 and paves the way for testing the discrepancy observed in the magnetic moment anomaly of the muon5 in the electron sector6. The fine-structure constant is determined with an accuracy of 81 parts per trillion using matter-wave interferometry to measure the rubidium atom recoil velocity.
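As a quick arithmetic cross-check of the quoted numbers (not part of the paper itself), the parenthetical uncertainty (11) on the last two digits of α−1 = 137.035999206 corresponds to a fractional uncertainty of about 8×10−11, i.e. roughly 80 parts per trillion, matching the quoted 81 ppt once the unrounded uncertainty is used. A minimal Python sketch:

```python
# Cross-check of the quoted relative accuracy of alpha^-1 (illustrative, not from the paper).
alpha_inv = 137.035999206   # measured inverse fine-structure constant
sigma = 0.000000011         # parenthetical "(11)" uncertainty on the last two digits

rel = sigma / alpha_inv
print(f"relative accuracy ~ {rel:.2e}")          # ~8.0e-11
print(f"~ {rel * 1e12:.0f} parts per trillion")  # ~80 ppt, consistent with the quoted 81 ppt
```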

342 citations


Journal ArticleDOI
TL;DR: In this paper, the authors identify the origin of this shift as arising from the exponentiation of spin operators for the recently defined minimally coupled three-particle amplitudes of spinning particles coupled to gravity, in the large-spin limit.
Abstract: Long ago, Newman and Janis showed that a complex deformation z → z + ia of the Schwarzschild solution produces the Kerr solution. The underlying explanation for this relationship has remained obscure. The complex deformation has an electromagnetic counterpart: by shifting the Coulomb potential, we obtain the EM field of a certain rotating charge distribution which we term $$ \sqrt{\mathrm{Kerr}} $$ . In this note, we identify the origin of this shift as arising from the exponentiation of spin operators for the recently defined “minimally coupled” three-particle amplitudes of spinning particles coupled to gravity, in the large-spin limit. We demonstrate this by studying the impulse imparted to a test particle in the background of the heavy spinning particle. We first consider the electromagnetic case, where the impulse due to $$ \sqrt{\mathrm{Kerr}} $$ is reproduced by a charged spinning particle; the shift of the Coulomb potential is matched to the exponentiated spin-factor appearing in the amplitude. The known impulse due to the Kerr black hole is then trivially derived from the gravitationally coupled spinning particle via the double copy.

206 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that the constrained Hamiltonian dynamics induced by strong Rydberg interactions maps exactly onto the one of a U(1) lattice gauge theory and that the recently observed anomalously slow dynamics corresponds to a string-inversion mechanism, reminiscent of the string breaking typically observed in gauge theories.
Abstract: Gauge theories are the cornerstone of our understanding of fundamental interactions among elementary particles. Their properties are often probed in dynamical experiments, such as those performed at ion colliders and high-intensity laser facilities. Describing the evolution of these strongly coupled systems is a formidable challenge for classical computers and represents one of the key open quests for quantum simulation approaches to particle physics phenomena. In this work, we show how recent experiments done on Rydberg atom chains naturally realize the real-time dynamics of a lattice gauge theory at system sizes at the boundary of classical computational methods. We prove that the constrained Hamiltonian dynamics induced by strong Rydberg interactions maps exactly onto that of a U(1) lattice gauge theory. Building on this correspondence, we show that the recently observed anomalously slow dynamics corresponds to a string-inversion mechanism, reminiscent of the string breaking typically observed in gauge theories. This underlies the generality of this slow dynamics, which we illustrate in the context of one-dimensional quantum electrodynamics on the lattice. Within the same platform, we propose a set of experiments that generically show long-lived oscillations, including the evolution of particle-antiparticle pairs, and discuss how a tunable topological angle can be realized, further affecting the dynamics following a quench. Our work shows that the state of the art for quantum simulation of lattice gauge theories is at 51 qubits and connects the recently observed slow dynamics in atomic systems to archetypal phenomena in particle physics.

165 citations



Journal ArticleDOI
10 Jan 2020-Science
TL;DR: In this article, the deconfinement of spin and charge excitations in real space after the removal of a particle in Fermi-Hubbard chains of ultracold atoms was studied.
Abstract: Elementary particles carry several quantum numbers, such as charge and spin. However, in an ensemble of strongly interacting particles, the emerging degrees of freedom can fundamentally differ from those of the individual constituents. For example, one-dimensional systems are described by independent quasiparticles carrying either spin (spinon) or charge (holon). Here, we report on the dynamical deconfinement of spin and charge excitations in real space after the removal of a particle in Fermi-Hubbard chains of ultracold atoms. Using space- and time-resolved quantum gas microscopy, we tracked the evolution of the excitations through their signatures in spin and charge correlations. By evaluating multipoint correlators, we quantified the spatial separation of the excitations in the context of fractionalization into single spinons and holons at finite temperatures.

53 citations


Journal ArticleDOI
TL;DR: A variety of observations impose upper limits at the nano Gauss level on magnetic fields that are coherent on inter-galactic scales, while blazar observations indicate a lower bound of $\sim 10^{-16}$ Gauss.
Abstract: A variety of observations impose upper limits at the nano Gauss level on magnetic fields that are coherent on inter-galactic scales while blazar observations indicate a lower bound $\sim 10^{-16}$ Gauss. Such magnetic fields can play an important astrophysical role, for example at cosmic recombination and during structure formation, and also provide crucial information for particle physics in the early universe. Magnetic fields with significant energy density could have been produced at the electroweak phase transition. The evolution and survival of magnetic fields produced on sub-horizon scales in the early universe, however, depends on the magnetic helicity which is related to violation of symmetries in fundamental particle interactions. The generation of magnetic helicity requires new CP violating interactions that can be tested by accelerator experiments via decay channels of the Higgs particle.

53 citations


Journal ArticleDOI
TL;DR: In this article, a search is reported for elementary particles with charges much smaller than the electron charge, using a data sample of proton-proton collisions provided by the CERN Large Hadron Collider in 2018, corresponding to an integrated luminosity of 37.5 fb$^{-1}$ at a center-of-mass energy of 13 TeV.
Abstract: We report on a search for elementary particles with charges much smaller than the electron charge using a data sample of proton-proton collisions provided by the CERN Large Hadron Collider in 2018, corresponding to an integrated luminosity of 37.5 fb$^{-1}$ at a center-of-mass energy of 13 TeV. A prototype scintillator-based detector is deployed to conduct the first search at a hadron collider sensitive to particles with charges ${\leq}0.1e$. The existence of new particles with masses between 20 and 4700 MeV is excluded at 95% confidence level for charges between $0.006e$ and $0.3e$, depending on their mass. New sensitivity is achieved for masses larger than $700$ MeV.

50 citations


Posted Content
01 Jul 2020-viXra
TL;DR: In Quantum FFF Theory, Fermions are supposed to be small rigid transformer strings with a propeller shape, able to become compound particles to form Quarks, as discussed by the authors, and different elementary particles owe their different qualities to their different complex stringy shapes.
Abstract: In Quantum FFF Theory, Fermions are supposed to be small rigid transformer strings with a propeller shape, able to become compound particles to form Quarks. Different elementary particles have different qualities by their different complex stringy shape. Leptons and Quarks have a propeller shape with left or right handed pitch creating charge difference. Gluons, Photons and Neutrino particles have no comparable pitch.

49 citations


Posted Content
01 Feb 2020-viXra
TL;DR: Vuyk et al., as mentioned in this paper, presented a possible solution based on complex 3-D ring-shaped particles, which are equipped with three point-like hinges and one splitting point, all four locations divided equally over the ring surface.
Abstract: 3-Dimensional Rigid Transformer Strings with Composite Topological Strings, by Leo Vuyk (Rotterdam, the Netherlands). In particle physics it is an interesting challenge to postulate that the FORM and structure of elementary particles is the origin of the different FUNCTIONS of these particles. In this paper we present a possible solution based on complex 3-D ring-shaped particles, which are equipped with three point-like hinges and one splitting point, all four locations divided equally over the ring surface. The 3-D ring itself is postulated to represent the “Virgin Mother” of all other particles and is coined the Axion Higgs particle, supplied with the three hinges coded (OOO), which gives the particle the opportunity to transform, after real mechanical collision with other particles, into a different shape with a different function. Thus, in this Quantum Function Follows Form (Q-FFF) theory, the Axion Higgs is interpreted as a massless transformer particle able to create the universe by transforming its shape after real mechanical collision and merging with other shaped particles into composite and compound knots.

49 citations


Journal ArticleDOI
TL;DR: In this paper, diffusive shock acceleration (DSA) of electrons in non-relativistic quasi-perpendicular shocks using self-consistent one-dimensional particle-in-cell (PIC) simulations is studied.
Abstract: We study diffusive shock acceleration (DSA) of electrons in non-relativistic quasi-perpendicular shocks using self-consistent one-dimensional particle-in-cell (PIC) simulations. By exploring the parameter space of sonic and Alfvenic Mach numbers we find that high Mach number quasi-perpendicular shocks can efficiently accelerate electrons to power-law downstream spectra with slopes consistent with DSA prediction. Electrons are reflected by magnetic mirroring at the shock and drive non-resonant waves in the upstream. Reflected electrons are trapped between the shock front and upstream waves and undergo multiple cycles of shock drift acceleration before the injection into DSA. Strong current-driven waves also temporarily change the shock obliquity and cause mild proton pre-acceleration even in quasi-perpendicular shocks, which otherwise do not accelerate protons. These results can be used to understand nonthermal emission in supernova remnants and intracluster medium in galaxy clusters.
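For orientation, the "slopes consistent with DSA prediction" mentioned above follow from textbook test-particle theory: the downstream power-law index is fixed by the shock compression ratio, which for a strong shock with adiabatic index 5/3 approaches 4, giving f(p) ∝ p−4. A short Python sketch of these standard relations (illustrative only, not the paper's PIC calculation):

```python
# Standard test-particle DSA relations (illustrative sketch, not the paper's PIC results).
def compression_ratio(mach_sonic, gamma=5.0 / 3.0):
    """Rankine-Hugoniot density compression ratio of a hydrodynamic shock."""
    m2 = mach_sonic ** 2
    return (gamma + 1.0) * m2 / ((gamma - 1.0) * m2 + 2.0)

def dsa_momentum_index(r):
    """Power-law index q of the downstream distribution f(p) ~ p^-q in test-particle DSA."""
    return 3.0 * r / (r - 1.0)

for mach in (3, 10, 50):
    r = compression_ratio(mach)
    print(f"M_s = {mach:2d}: r = {r:.2f}, f(p) ~ p^-{dsa_momentum_index(r):.2f}")
    # high Mach numbers approach r = 4 and q = 4 (E^-2 for relativistic particles)
```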

36 citations


Journal ArticleDOI
TL;DR: The Future Circular Collider integrated programme, as described in this paper, foresees operation in two stages: initially an electron-positron collider serving as a Higgs and electroweak factory running at different center-of-mass energies, followed by a proton-proton collider at a collision energy of 100 TeV.
Abstract: Particle physics has arrived at an important moment of its history. The discovery of the Higgs boson has completed the Standard Model, the core theory behind the known set of elementary particles and fundamental interactions. However, the Standard Model leaves important questions unanswered, such as the nature of dark matter, the origin of the matter–antimatter asymmetry in the Universe, and the existence and hierarchy of neutrino masses. To address these questions and the origin of the newly discovered Higgs boson, high-energy colliders are required. Future generations of such machines must be versatile, as broad and powerful as possible with a capacity of unprecedented precision, sensitivity and energy reach. Here, we argue that the Future Circular Colliders offer unique opportunities, and discuss their physics motivation, key measurements, accelerator strategy, research and development status, and technical challenges. The Future Circular Collider integrated programme foresees operation in two stages: initially an electron–positron collider serving as a Higgs and electroweak factory running at different centre-of-mass energies, followed by a proton–proton collider at a collision energy of 100 TeV. The interplay between measurements at the two collider stages underscores the synergy of their physics potentials. The Future Circular Colliders are proposed as a future step after the Large Hadron Collider has stopped running. The first stage foresees collision of electron–positron pairs before a machine upgrade to allow proton–proton operation.

Journal ArticleDOI
TL;DR: In this paper, the authors systematically study scenarios for multi-component dark matter based on various ZN symmetries (N ≤ 10) and with different sets of scalar fields charged under it, and explicitly obtain and illustrate the regions of parameter space that are consistent with up to five dark matter particles.
Abstract: The dark matter may consist not of one elementary particle but of different species, each of them contributing a fraction of the observed dark matter density. A major theoretical difficulty with this scenario — dubbed multi-component dark matter — is to explain the stability of these distinct particles. Imposing a single ZN symmetry, which may be a remnant of a spontaneously broken U(1) gauge symmetry, seems to be the simplest way to simultaneously stabilize several dark matter particles. In this paper we systematically study scenarios for multi-component dark matter based on various ZN symmetries (N ≤ 10) and with different sets of scalar fields charged under it. A generic feature of these scenarios is that the number of stable particles is not determined by the Lagrangian but depends on the relations among the masses of the different fields charged under the ZN symmetry. We explicitly obtain and illustrate the regions of parameter space that are consistent with up to five dark matter particles. For N odd, all these particles turn out to be complex, whereas for N even one of them may be real. Within this framework, many new models for multi-component dark matter can be implemented.
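To illustrate the statement that the number of stable particles depends on the mass relations rather than on the Lagrangian alone, here is a hedged toy enumeration (the names, spectrum, and simplified decay criterion below are ours, not the paper's): a Z_N-charged state is counted as unstable whenever some multiset of strictly lighter Z_N-charged states carries the same total charge mod N with a smaller total mass, since SM particles can absorb the remaining energy while carrying zero Z_N charge (antiparticles are ignored for brevity).

```python
# Toy count of stable dark-matter states under a Z_N symmetry (illustrative sketch only).
from itertools import combinations_with_replacement

def stable_states(spectrum, N, max_final=3):
    """spectrum: list of (name, charge, mass). Returns the names of states with no open decay."""
    ordered = sorted(spectrum, key=lambda p: p[2])  # lightest first
    stable = []
    for i, (name, charge, mass) in enumerate(ordered):
        lighter = ordered[:i]
        open_decay = any(
            (sum(c for _, c, _ in combo) - charge) % N == 0
            and sum(m for _, _, m in combo) < mass
            for n in range(1, max_final + 1)
            for combo in combinations_with_replacement(lighter, n)
        )
        if not open_decay:
            stable.append(name)
    return stable

# Hypothetical Z_4 spectrum (charge mod 4, mass in GeV): phi3 -> phi1 phi2 is kinematically open,
# so only the two lighter states survive as dark matter.
print(stable_states([("phi1", 1, 100.0), ("phi2", 2, 150.0), ("phi3", 3, 450.0)], N=4))
```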

Journal ArticleDOI
TL;DR: In this article, the dominant two-loop corrections to the Higgs trilinear coupling and the quartic coupling were investigated in models with extended Higgs sectors, using the effective potential approximation.
Abstract: We compute the dominant two-loop corrections to the Higgs trilinear coupling $$\lambda _{hhh}$$ and to the Higgs quartic coupling $$\lambda _{hhhh}$$ in models with extended Higgs sectors, using the effective-potential approximation. We provide in this paper all necessary details about our calculations, and present general $$\overline{{\mathrm{MS}}}$$ expressions for derivatives of the integrals appearing in the effective potential at two loops. We also consider three particular Beyond-the-Standard-Model (BSM) scenarios – namely a typical scenario of an Inert Doublet Model (IDM), and scenarios of a Two-Higgs-Doublet Model (2HDM) and of a Higgs Singlet Model (HSM) without scalar mixing – and we include all the necessary finite counterterms to obtain (in addition to $$\overline{{\mathrm{MS}}}$$ results) on-shell scheme expressions for the corrections to the Higgs self-couplings. With these analytic results, we investigate the possible magnitude of two-loop BSM contributions to the Higgs self-couplings and the fate of the non-decoupling effects that are known to appear at one loop. We find that, at least as long as perturbative unitarity conditions are fulfilled, the size of two-loop corrections remains well below that of one-loop corrections. Typically, two-loop contributions to $$\lambda _{hhh}$$ amount to approximately 20% of those at one loop, implying that the non-decoupling effects observed at one loop are not significantly modified, but also meaning that higher-order corrections need to be taken into account for the future perspective of precise measurements of the Higgs trilinear coupling.

Journal ArticleDOI
Albert M. Sirunyan1, Armen Tumasyan1, Wolfgang Adam, Federico Ambrogi  +2303 moreInstitutions (170)
TL;DR: A novel jet reconstruction technique is used for the first time at the LHC, which improves the precision by a factor of 3 relative to an earlier measurement.
Abstract: A measurement is reported of the jet mass distribution in hadronic decays of boosted top quarks produced in pp collisions at √s=13 TeV. The data were collected with the CMS detector at the LHC and correspond to an integrated luminosity of 35.9 fb−1. The measurement is performed in the lepton+jets channel of tt¯ events, where the lepton is an electron or muon. The products of the hadronic top quark decay t→bW→bqq¯′ are reconstructed as a single jet with transverse momentum larger than 400 GeV. The tt¯ cross section as a function of the jet mass is unfolded at the particle level and used to extract a value of the top quark mass of 172.6±2.5 GeV. A novel jet reconstruction technique is used for the first time at the LHC, which improves the precision by a factor of 3 relative to an earlier measurement. This highlights the potential of measurements using boosted top quarks, where the new technique will enable future precision measurements.

Journal ArticleDOI
TL;DR: In this paper, the authors perform a detailed analysis of the most promising decay channels and derive the expected sensitivity of their combination, assuming an integrated luminosity of 30 $$\hbox {ab}^{-1}$$.
Abstract: Higgs pair production provides a unique handle for measuring the strength of the Higgs self interaction and constraining the shape of the Higgs potential. Among the proposed future facilities, a circular 100 TeV proton–proton collider would provide the most precise measurement of this crucial quantity. In this work, we perform a detailed analysis of the most promising decay channels and derive the expected sensitivity of their combination, assuming an integrated luminosity of 30 $$\hbox {ab}^{-1}$$ . Depending on the assumed detector performance and systematic uncertainties, we observe that the Higgs self-coupling will be measured with a precision in the range 3.4–7.8% at 68% confidence level.

Journal ArticleDOI
TL;DR: In this article, accurate relativistic coupled cluster calculations of the nuclear magnetic quadrupole moments (MQM) interaction constants in BaF, YbF, BaOH, and YbOH are presented.
Abstract: Nuclear magnetic quadrupole moments (MQMs), like intrinsic electric dipole moments of elementary particles, violate both parity and time-reversal symmetry and, therefore, probe physics beyond the standard model. We report on accurate relativistic coupled cluster calculations of the nuclear MQM interaction constants in BaF, YbF, BaOH, and YbOH. We elaborate on estimates of the uncertainty of our results. The implications for experiments searching for nonzero nuclear MQMs are discussed.

Georges Aad1, Brad Abbott2, Dale Charles Abbott3, A. Abed Abud4  +2978 moreInstitutions (217)
22 Apr 2020
TL;DR: In this paper, the first observation of the electroweak production of two jets and a $Z$-boson pair at the Large Hadron Collider is reported, using proton-proton collision data corresponding to an integrated luminosity of 139 fb$^{-1}$ recorded at a centre-of-mass energy of 13 TeV by the ATLAS detector.
Abstract: Electroweak symmetry breaking explains the origin of the masses of elementary particles via their interactions with the Higgs field. Besides the measurements of the Higgs boson properties, the study of the scattering of massive vector bosons (with spin one) at the Large Hadron Collider makes it possible to probe the nature of electroweak symmetry breaking with an unprecedented sensitivity. Among all processes related to vector-boson scattering, the electroweak production of two jets and a $Z$-boson pair is a rare and important one. This article reports on the first observation of this process using proton-proton collision data corresponding to an integrated luminosity of 139 fb$^{-1}$ recorded at a centre-of-mass energy of 13 TeV by the ATLAS detector. Two different final states originating from the decays of the $Z$-boson pair, one containing four charged leptons and the other containing two charged leptons and two neutrinos, are considered. The hypothesis of no electroweak production is rejected with a statistical significance of 5.5 $\sigma$, and the measured cross-section for electroweak production is consistent with the Standard Model prediction. In addition, cross-sections for inclusive production of a $Z$-boson pair and two jets are reported for the two final states.

Journal ArticleDOI
TL;DR: In this paper, the background generated by muon beams of $750$ GeV is characterized and the performance of the tracking system and the calorimeter detector are illustrated to obtain track and jet reconstruction performance.
Abstract: A muon collider represents the ideal machine to reach very high center-of-mass energies and luminosities by colliding elementary particles. This is the result of the low level of beamstrahlung and synchrotron radiation compared to linear or circular electron-positron colliders. In contrast with other lepton machines, the design of a detector for a multi-TeV muon collider requires knowledge of the interaction region due to the presence of a large amount of background induced by muon beam decays. The physics reach can be properly evaluated only when the detector performance is determined. In this work, the background generated by muon beams of $750$ GeV is characterized and the performance of the tracking system and the calorimeter detector are illustrated. Solutions to minimize the effect of the beam-induced background are discussed and applied to obtain track and jet reconstruction performance. The $\mu^+\mu^-\to H\nu\bar{\nu}\to b\bar{b}\nu\bar{\nu}$ process is fully simulated and reconstructed to demonstrate that physics measurements are possible in this harsh environment. The precision on the Higgs boson coupling to $b\bar b$ is evaluated for $\sqrt{s}=1.5$, 3, and 10 TeV and compared to other proposed machines.

Journal ArticleDOI
TL;DR: In this article, the authors investigate the circumstances under which this complementarity between collider and radio signals of dark matter can be useful in probing physics beyond the standard model of elementary particles.
Abstract: A weakly interacting dark matter candidate is difficult to detect at high-energy colliders like the LHC, if its mass is close to or higher than a TeV. On the other hand, pair annihilation of such particles may give rise to $e^+e^-$ pairs in dwarf spheroidal galaxies (dSph), which in turn can lead to radio synchrotron signals that are detectable at the upcoming Square Kilometre Array (SKA) telescope within a moderate observation time. We investigate the circumstances under which this complementarity between collider and radio signals of dark matter can be useful in probing physics beyond the standard model of elementary particles. Both particle physics issues and the roles of diffusion and electromagnetic energy loss of the $e^{\pm}$ are taken into account. First, the criteria for detectability of trans-TeV dark matter are analyzed independently of the particle physics model(s) involved. We thereafter use some benchmarks based on a popular scenario, namely, the minimal supersymmetric standard model. It is thus shown that the radio flux from a dSph like Draco should be observable in about 100 h at the SKA, for dark matter masses up to 4--8 TeV. In addition, the regions in the space spanned by astrophysical parameters, for which such signals should be detectable at the SKA, are marked out.

10 Oct 2020
TL;DR: In this paper, a search for the Zγ decay of the Higgs boson, with Z boson decays into pairs of electrons or muons is presented, using proton-proton collision data at √s=13 TeV corresponding to an integrated luminosity of 139 fb−1 recorded by the ATLAS detector at the Large Hadron Collider.
Abstract: A search for the Zγ decay of the Higgs boson, with Z boson decays into pairs of electrons or muons is presented. The analysis uses proton–proton collision data at √s=13 TeV corresponding to an integrated luminosity of 139 fb−1 recorded by the ATLAS detector at the Large Hadron Collider. The observed data are consistent with the expected background with a p-value of 1.3%. An upper limit at 95% confidence level on the production cross-section times the branching ratio for pp→H→Zγ is set at 3.6 times the Standard Model prediction while 2.6 times is expected in the presence of the Standard Model Higgs boson. The best-fit value for the signal yield normalised to the Standard Model prediction is $2.0^{+1.0}_{-0.9}$ where the statistical component of the uncertainty is dominant.
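As a side note on the quoted numbers (a standard statistics conversion, not part of the ATLAS analysis), the 1.3% background-only p-value corresponds to roughly a 2.2σ one-sided Gaussian significance, consistent with the best-fit signal strength of 2.0 lying about two standard deviations above zero:

```python
# Convert the quoted background-only p-value into a one-sided Gaussian significance (sketch).
from scipy.stats import norm

p_value = 0.013            # observed compatibility with the background-only hypothesis
z = norm.isf(p_value)      # inverse survival function of the standard normal
print(f"significance ~ {z:.1f} sigma")  # ~2.2 sigma
```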

Posted Content
01 Aug 2020-viXra
TL;DR: In this paper, a new proposal for the mathematics of time and space, as an application of mathematics to the paradigm of time, is proposed, as the calculus of time-points in space, here as Temporal Calculus, a calculus not focusing on space primarily, yet time.
Abstract: The application of the Calculus of Infinitesimals (differentials/integrals) to physical analysis is questioned, given the paradoxical lack of precise particle definition it grants the study of the elementary particles, despite the precision of such mathematics itself and of its application to particle physics. To offer more mathematical precision of definition to the elementary particles, a new proposal for the mathematics of time and space, as an application of mathematics to the paradigm of time, is put forward: the calculus of time-points in space, here called “Temporal Calculus”, a calculus focusing primarily not on space but on time. As a standard of reference, this time-algorithm is based on the human temporal perception ability in the three paradigms most commonly associated with it, namely time-before, time-now, and time-after, assigning mathematical values to those qualities that then give rise to the “golden-ratio” equation. When applied to 3-d space, this forms a fractal (golden-ratio) lattice of time-points that is able to derive all the known equations and constants of physical phenomena, from mass to charge and particle energy to particle spin, for elementary and standard particles. The case is presented not for an infinitely metrically expanding universe but for a steady-state time-space system that links the CMBR with the vacuum permittivity and permeability, together with calculating the Yang-Mills mass gap and associated elementary particle phenomena, while finally explaining the existence of antimatter, all via a field of time-points in space.

Posted Content
01 Sep 2020-viXra
TL;DR: In this article, the formation of the elementary particles as mass-structures, defining their confinement and relationship to their over-arching quantum environment, detailing the concepts of symmetry breaking, asymptotic freedom, particle confinement, and baryon asymmetry, as a follow-on from the proposed solution to the Yang Mills existence and mass gap problem of the previous paper.
Abstract: Here Temporal Calculus explains the formation of the elementary particles as mass-structures, defining their confinement and relationship to their over-arching quantum environment, and detailing the concepts of symmetry breaking, asymptotic freedom, particle confinement, and baryon asymmetry, as a follow-on from the proposed solution to the “Yang Mills existence and mass gap” problem of the previous paper. Moreover, a new phenomenon is introduced, namely “atomic barrier enhancement”, pointing to mechanisms for new atomic fuel generation, here explained in the form of Hydrogen fuel generation.


Posted Content
01 Aug 2020-viXra
TL;DR: In this article, it is shown that the birth mass of an elementary particle is a result of spherical deformation of the quantized space-time, based on the concept of gravity of the curved four-dimensional space-time of Einstein.
Abstract: It is shown that the birth mass of an elementary particle is a result of spherical deformation of the quantized space-time, based on the concept of gravity as the curved four-dimensional space-time of Einstein. Theorists mistakenly believe that Einstein's theory of gravity does not fit into the Standard Model (SM); it is shown that, on the contrary, the SM does not fit into Einstein's theory of gravity. The Higgs boson contradicts the concept of curved space-time as the basis of gravity. Therefore, the Higgs boson cannot carry the mass of an elementary particle. The mechanism for the generation of the mass of an elementary particle is discussed in detail in the theory of Superunification.

Journal ArticleDOI
TL;DR: This paper proposes to solve the Higgs Boson Classification Problem with four Machine Learning Methods, using the Pyspark environment: Logistic Regression (LR), Decision Tree (DT), Random Forest (RF) and Gradient Boosted Tree (GBT).
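Since this entry describes a workflow rather than a result, a minimal PySpark sketch of the four named classifiers may help; the file path, column names, and hyperparameters below are hypothetical placeholders rather than the authors' actual setup (a HIGGS-style CSV with a binary "label" column and numeric feature columns is assumed).

```python
# Hedged PySpark sketch of the four classifiers named above (hypothetical paths and columns).
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import (LogisticRegression, DecisionTreeClassifier,
                                       RandomForestClassifier, GBTClassifier)
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("higgs-classification").getOrCreate()

# Assumed layout: a "label" column (0 = background, 1 = signal) plus numeric feature columns.
df = spark.read.csv("higgs.csv", header=True, inferSchema=True)
features = [c for c in df.columns if c != "label"]
df = VectorAssembler(inputCols=features, outputCol="features").transform(df)
train, test = df.randomSplit([0.8, 0.2], seed=42)

classifiers = {
    "LR": LogisticRegression(labelCol="label", featuresCol="features"),
    "DT": DecisionTreeClassifier(labelCol="label", featuresCol="features"),
    "RF": RandomForestClassifier(labelCol="label", featuresCol="features", numTrees=100),
    "GBT": GBTClassifier(labelCol="label", featuresCol="features", maxIter=50),
}

evaluator = BinaryClassificationEvaluator(labelCol="label", metricName="areaUnderROC")
for name, clf in classifiers.items():
    model = clf.fit(train)                           # train on the 80% split
    auc = evaluator.evaluate(model.transform(test))  # score on the held-out 20%
    print(f"{name}: test AUC = {auc:.3f}")
```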

Journal ArticleDOI
TL;DR: In this article, the so-called QCD and electroweak complete-NLO predictions for $H b \bar b$ production are calculated for the first time, using the four-flavour scheme, and it is shown that at the LHC not only the $gg$F but also the $ZH$ and even the vector-boson fusion channels are sizeable irreducible backgrounds.
Abstract: The hadroproduction of a Higgs boson in association with a bottom-quark pair ($H b \bar b$) is commonly considered as the key process for directly probing the Yukawa interaction between the Higgs boson and the bottom quark ($y_b$). However, in the Standard Model (SM) this process is also known to suffer from very large irreducible backgrounds from other Higgs production channels, notably gluon-fusion ($gg$F). In this paper we calculate for the first time the so-called QCD and electroweak complete-NLO predictions for $H b \bar b$ production, using the four-flavour scheme. Our calculation shows that not only the $gg$F but also the $ZH$ and even the vector-boson fusion channels are sizeable irreducible backgrounds. Moreover, we demonstrate that, at the LHC, the rates of these backgrounds are very large with respect to the "genuine" and $y_b$-dependent $H b \bar b$ production mode. In particular, no suppression occurs at the differential level and therefore backgrounds survive typical analysis cuts. This fact further jeopardises the chances of measuring at the LHC the $y_b$-dependent component of $H b \bar b$ production in the SM. Indeed, unless $y_b$ is significantly enlarged by new physics, even in beyond-the-SM scenarios the direct determination of $y_b$ via this process seems to be hopeless at the LHC.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the phenomenology of the two Higgs doublet model with a real singlet scalar S (N2HDM) and found that a large singlet-doublet admixture is still compatible with the recent Higgs data from LHC.
Abstract: We study the phenomenology of the two Higgs doublet model with a real singlet scalar S (N2HDM). The model predicts three CP-even Higgses $$h_{1,2,3}$$, one CP-odd $$A^0$$ and a pair of charged Higgs bosons. We discuss the consistency of the N2HDM with theoretical constraints as well as with all available experimental data. In contrast with previous studies, we focus on the scenario where $$h_2$$ is the Standard Model (SM) 125 GeV Higgs, while $$h_1$$ is lighter than $$h_2$$, which may open a window for Higgs to Higgs decays. We perform an extensive scan over the parameter space of the N2HDM of type I and explore the effect of the singlet-doublet admixture. We find that a large singlet-doublet admixture is still compatible with the recent Higgs data from the LHC. Moreover, we show that $$h_1$$ could be quasi-fermiophobic and would decay dominantly into two photons. We also study in detail the consistency with LHC data of the non-detected decay $$h_2\rightarrow h_1 h_1$$ followed by $$h_1\rightarrow \gamma \gamma $$, which leads to a four-photon final state at the LHC: $$pp\rightarrow h_2\rightarrow h_1 h_1\rightarrow 4\gamma $$. Using the results of null searches for multi-photon final states carried out by the ATLAS collaboration, we find that a large area of the parameter space is still allowed. We also demonstrate that various neutral Higgs bosons of the N2HDM could have several exotic decays.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new leptogenesis scenario in which the lepton asymmetry and matter particles are simultaneously generated due to the coherent oscillating Higgs background, and they considered the type-I seesaw model as an illuminating example.
Abstract: We propose a new leptogenesis scenario in which the lepton asymmetry and matter particles are simultaneously generated by the coherently oscillating Higgs background. To demonstrate the possibility of our scenario, we consider the type-I seesaw model as an illustrative example and present a numerical analysis. In order to generate the required lepton number $$|n_L/s| = 2.4 \times 10^{-10}$$ , we find that the scale of the Higgs background oscillation is required to be higher than $$10^{14}$$ GeV.

Posted Content
01 Apr 2020-viXra
TL;DR: In this article, a causal extension of the Feynman-Gell-Mann electron model has been proposed to consider isolated particles and real constitutive wave elements as localized, extended spacetime structures (i.e., moving within time-like hypertubes or M-Theoretic higher dimensional brane topologies).
Abstract: Traditionally, elementary particles, by definition are considered zero-dimensional (0D) or point-like elements; strings or branes on the other hand are dimensionally extended entities. Dirac’s electron hypertube model appears to provide insight into this duality. Recent attempts to consider isolated particles and real constitutive wave elements as localized, extended spacetime structures (i.e., moving within time-like hypertubes or M-Theoretic higher dimensional (HD) brane topologies) are developed within a causal extension of the Feynman-Gell-Mann electron model. These extended structures contain real internal motions, (i.e., internal hidden parameters) locally correlated with the "hidden parameters" describing the local collective motions of the corresponding pilot-waves. The Dirac electron hypertube has been missed by the uncertainty principle. Recent experimental evidence and new protocols for supervening uncertainty are discussed.

Posted Content
TL;DR: In this article, the authors provide a simple geometric meaning for deformations of so-called $T{\overline T}$ type in relativistic and non-relativistic systems.
Abstract: We provide a simple geometric meaning for deformations of so-called $T{\overline T}$ type in relativistic and non-relativistic systems. Deformations by the cross products of energy and momentum currents in integrable quantum field theories are known to modify the thermodynamic Bethe ansatz equations by a "CDD factor". In turn, CDD factors may be interpreted as additional, fixed shifts incurred in scattering processes: a finite width added to the fundamental particles (or, if negative, to the free space between them). We suggest that this physical effect is a universal way of understanding $T{\overline T}$ deformations, both in classical and quantum systems. We first show this in non-relativistic systems, with particle conservation and translation invariance, using the deformation formed out of the densities and currents of particles and momentum. This holds at the level of the equations of motion, and for any interaction potential, integrable or not. We then argue, and show by similar techniques in free relativistic particle systems, that $T\overline T$ deformations of relativistic systems produce the equivalent phenomenon, accounting for length contractions. We also show that, in both the relativistic and non-relativistic cases, the width of particles is equivalent to a state-dependent change of metric, where the distance function discounts the particles' widths, or counts the additional free space. This generalises and explains the known field-dependent coordinate change describing $T\overline T$ deformations. The results connect such deformations with generalised hydrodynamics, where the relations between scattering shifts, widths of particles and state-dependent changes of metric have been established.
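A minimal illustration of the width/metric picture described above (our own sketch in the classical hard-rod spirit, under the assumption that the deformation simply assigns each particle a fixed length $\Delta$): for $N$ ordered particles at positions $x_1 < \dots < x_N$ on a line, the change of coordinates $$\tilde{x}_i = x_i - (i-1)\,\Delta, \qquad \mathrm{d}\tilde{x} = \mathrm{d}x - \Delta\, \mathrm{d}N(x),$$ where $N(x)$ counts the particles to the left of $x$, maps hard rods of length $\Delta$ onto free point particles. The deformed "metric" discounts the particles' widths exactly as stated above, while a negative $\Delta$ instead adds free space between them.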