Showing papers by "Frankfurt Institute for Advanced Studies" published in 2015


Journal ArticleDOI
TL;DR: In this article, the authors describe the changes required to the model to reproduce in detail the new data available from LHC and the consequences in the interpretation of these data, in particular the effect of the collective hadronization in p-p scattering.
Abstract: EPOS is a Monte-Carlo event generator for minimum bias hadronic interactions, used for both heavy ion interactions and cosmic ray air shower simulations. Since the last public release in 2009, the LHC experiments have provided a number of very interesting data sets comprising minimum bias p-p, p-Pb and Pb-Pb interactions. We describe the changes required to the model to reproduce in detail the new data available from LHC and the consequences in the interpretation of these data. In particular we discuss the effect of the collective hadronization in p-p scattering. A different parametrization of flow has been introduced in the case of a small volume with high density of thermalized matter (core) reached in p-p, compared to the large volume produced in heavy ion collisions. Both parametrizations depend only on the geometry and the amount of secondary particles entering the core, and not on the beam mass or energy. The transition between the two flow regimes can be tested with p-Pb data. EPOS LHC is able to reproduce all minimum bias results for all particles with transverse momentum from pt = 0 to a few GeV/c.

939 citations


Journal ArticleDOI
TL;DR: This work organizes the available and potential novel statistical/modeling approaches to cross-frequency coupling (CFC) according to their biophysical interpretability, providing a road map towards an improved mechanistic understanding of CFC.

472 citations


Journal ArticleDOI
TL;DR: The results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.
Abstract: Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best “LFP proxy”, we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with “ground-truth” LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphology and spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.
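As a rough illustration of the kind of proxy considered here, the sketch below forms a fixed linear combination of the rectified, delayed population AMPA and GABA currents from a LIF simulation. The coefficient and the delays are placeholder values for illustration, not the fitted numbers reported in the paper.

```python
import numpy as np

def weighted_sum_lfp_proxy(i_ampa, i_gaba, dt_ms=0.1,
                           alpha=1.5, delay_ampa_ms=6.0, delay_gaba_ms=0.0):
    """Illustrative weighted-sum LFP proxy from population-summed LIF
    synaptic currents (1-D arrays over time). alpha and the delays are
    placeholders, not the coefficients fitted in the paper."""
    sa = int(round(delay_ampa_ms / dt_ms))
    sg = int(round(delay_gaba_ms / dt_ms))
    ampa = np.abs(np.roll(i_ampa, sa))   # rectified, delayed AMPA current (circular shift, fine for a sketch)
    gaba = np.abs(np.roll(i_gaba, sg))   # rectified, delayed GABA current
    return ampa - alpha * gaba           # sign convention chosen arbitrarily
```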

374 citations


Journal ArticleDOI
TL;DR: An atomic-resolution fibril structure of the Aβ1-40 peptide with the Osaka mutation (E22Δ), associated with early-onset AD is reported, which differs substantially from all previously proposed models.
Abstract: Despite its central importance for understanding the molecular basis of Alzheimer's disease (AD), high-resolution structural information on amyloid β-peptide (Aβ) fibrils, which are intimately linked with AD, is scarce. We report an atomic-resolution fibril structure of the Aβ1-40 peptide with the Osaka mutation (E22Δ), associated with early-onset AD. The structure, which differs substantially from all previously proposed models, is based on a large number of unambiguous intra- and intermolecular solid-state NMR distance restraints.

248 citations


Journal ArticleDOI
TL;DR: In this article, a general formalism is proposed to describe the shadow as an arbitrary polar curve expressed in terms of a Legendre expansion, which does not assume any knowledge of the properties of the shadow, e.g. the location of its center.
Abstract: A large international effort is under way to assess the presence of a shadow in the radio emission from the compact source at the centre of our Galaxy, Sagittarius A* (Sgr A*). If detected, this shadow would provide the first direct evidence of the existence of black holes and that Sgr A* is a supermassive black hole. In addition, the shape of the shadow could be used to learn about extreme gravity near the event horizon and to determine which theory of gravity better describes the observations. The mathematical description of the shadow has so far used a number of simplifying assumptions that are unlikely to be met by the real observational data. We here provide a general formalism to describe the shadow as an arbitrary polar curve expressed in terms of a Legendre expansion. Our formalism does not presume any knowledge of the properties of the shadow, e.g. the location of its centre, and offers a number of routes to characterize the distortions of the curve with respect to reference circles. These distortions can be implemented in a coordinate-independent manner by different teams analysing the same data. We show that the new formalism provides an accurate and robust description of noisy observational data, with smaller error variances when compared to previous approaches for the measurement of the distortion.
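To make the idea concrete, the sketch below fits a Legendre expansion to a synthetic, noisy shadow boundary R(ψ); the boundary shape, noise level, and choice of centre are all made up for illustration and are not taken from the paper, whose formalism in fact avoids assuming a centre.

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Synthetic noisy shadow boundary: radius R as a function of the polar
# angle psi measured from an assumed centre (purely illustrative values).
psi = np.linspace(0.0, np.pi, 181)
r_true = 5.2 + 0.3 * np.cos(psi) ** 2
r_obs = r_true + 0.05 * np.random.default_rng(0).normal(size=psi.size)

# Expand R(psi) in Legendre polynomials P_l(cos psi), keeping l <= 10.
coeffs = leg.legfit(np.cos(psi), r_obs, deg=10)
r_fit = leg.legval(np.cos(psi), coeffs)

print("rms residual:", np.sqrt(np.mean((r_fit - r_obs) ** 2)))
```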

183 citations


Journal ArticleDOI
TL;DR: In this paper, a viscous hybrid model employing the hadron transport approach UrQMD for the early and late nonequilibrium stages of the reaction, and 3+1 dimensional viscous hydrodynamics for the hot and dense quark-gluon plasma stage, is introduced.
Abstract: Hybrid approaches based on relativistic hydrodynamics and transport theory have been successfully applied for many years for the dynamical description of heavy-ion collisions at ultrarelativistic energies. In this work a new viscous hybrid model employing the hadron transport approach UrQMD for the early and late nonequilibrium stages of the reaction, and 3+1 dimensional viscous hydrodynamics for the hot and dense quark-gluon plasma stage, is introduced. This approach includes the equation of motion for finite baryon number and employs an equation of state with finite net-baryon density to allow for calculations in a large range of beam energies. The parameter space of the model is explored and constrained by comparison with the experimental data for bulk observables from the Super Proton Synchrotron and the phase I beam energy scan at the Relativistic Heavy Ion Collider. The favored parameter values depend on energy but allow extraction of the effective value of the shear viscosity coefficient over entropy density ratio $\eta/s$ in the fluid phase for the whole energy region under investigation. The estimated value of $\eta/s$ increases with decreasing collision energy, which may indicate that $\eta/s$ of the quark-gluon plasma depends on the baryochemical potential $\mu_B$.

142 citations


Journal ArticleDOI
TL;DR: In this paper, a modified, self-dual Schwarzschild-like metric was proposed to reproduce desirable aspects of a variety of disparate models in the sub-Planckian limit, while remaining Schwarzschild in the large mass limit.
Abstract: The Black Hole Uncertainty Principle correspondence suggests that there could exist black holes with mass beneath the Planck scale but radius of order the Compton scale rather than Schwarzschild scale. We present a modified, self-dual Schwarzschild-like metric that reproduces desirable aspects of a variety of disparate models in the sub-Planckian limit, while remaining Schwarzschild in the large mass limit. The self-dual nature of this solution under $M \leftrightarrow M^{-1}$ naturally implies a Generalized Uncertainty Principle with the linear form $$ \Delta x\sim \frac{1}{\Delta p}+\Delta p $$ . We also demonstrate a natural dimensional reduction feature, in that the gravitational radius and thermodynamics of sub-Planckian objects resemble that of (1 + 1)-D gravity. The temperature of sub-Planckian black holes scales as $M$ rather than $M^{-1}$ but the evaporation of those smaller than $10^{-36}$ g is suppressed by the cosmic background radiation. This suggests that relics of this mass could provide the dark matter.

126 citations


Journal ArticleDOI
TL;DR: In this article, the connection between black hole thermodynamics and chemistry is extended to the lower-dimensional regime by considering the rotating and charged Bañados, Teitelboim, and Zanelli (BTZ) metric in the ($2+1$)-dimensional and ($1+1$)-dimensional limits of Einstein gravity.
Abstract: The connection between black hole thermodynamics and chemistry is extended to the lower-dimensional regime by considering the rotating and charged Bañados, Teitelboim, and Zanelli (BTZ) metric in the ($2+1$)-dimensional and ($1+1$)-dimensional limits of Einstein gravity. The Smarr relation is naturally upheld in both BTZ cases, where those with $Q \ne 0$ violate the reverse isoperimetric inequality and are thus superentropic. The inequality can be maintained, however, with the addition of a new thermodynamic work term associated with the mass renormalization scale. The $D \rightarrow 0$ limit of a generic ($D+2$)-dimensional Einstein gravity theory is also considered to derive the Smarr and Komar relations, although the opposite sign definitions of the cosmological constant and thermodynamic pressure from the $D > 2$ cases must be adopted in order to satisfy the relation. The requirement of positive entropy implies an upper bound on the mass of a $(1+1)$-D black hole. Promoting an associated constant of integration to a thermodynamic variable allows one to define a "rotation" in one spatial dimension. Neither the $D=3$ nor the $D \rightarrow 2$ black holes exhibit any interesting phase behavior.

121 citations


Journal ArticleDOI
TL;DR: In this paper, the Bjorken flow was analyzed analytically in the presence of a transverse magnetic field, and it was shown that the decay of the fluid energy density with proper time τ is the same as for the time-honoured Bjorken flow without a magnetic field.
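For reference, the Bjorken scaling referred to here follows from longitudinal boost invariance; with an equation of state $p = c_s^2\,\varepsilon$ the energy density falls off as a power of proper time:

$$ \frac{d\varepsilon}{d\tau} = -\frac{\varepsilon + p}{\tau} \quad\Rightarrow\quad \varepsilon(\tau) \propto \tau^{-(1+c_s^2)}, $$

which reduces to the familiar $\varepsilon \propto \tau^{-4/3}$ for an ideal ultrarelativistic gas with $c_s^2 = 1/3$; the result quoted above is that a transverse magnetic field leaves this decay law unchanged.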

108 citations


Journal ArticleDOI
TL;DR: In this article, the authors studied the cross sections of superheavy nuclei in heavy-ion fusion and multinucleon transfer reactions, although their work was limited to the case of single nuclei.

100 citations



Journal ArticleDOI
TL;DR: In this paper, the authors acknowledge support from the ERC Synergy Grant "BlackHoleCam-Imaging the Event Horizon of Black Holes" (Grant 610058) and from the Ministry of Science and Technology of Taiwan under the grants NSC 100-2112-M-007-022-MY3 and MOST 103-2112-M-007-023-MY3.
Abstract: Support comes from the ERC Synergy Grant "BlackHoleCam-Imaging the Event Horizon of Black Holes" (Grant 610058) and from the Ministry of Science and Technology of Taiwan under the grants NSC 100-2112-M-007-022-MY3 and MOST 103-2112-M-007-023-MY3. J.L.G. acknowledges support from the Spanish Ministry of Economy and Competitiveness grant AYA2013-40825-P. K.N. and P.H. acknowledge support by NSF awards AST-0908010 and AST-0908040, and by NASA awards NNX09AD16G, NNX12AH06G, NNX13AP21G, and NNX13AP14G. A.M. acknowledges support from the Fund for Scientific Research (FWO) and the Belgian Federal Science Policy Office (Belspo).

Journal ArticleDOI
Leszek Adamczyk, J. K. Adkins, G. Agakishiev, Madan M. Aggarwal, +325 more (44 institutions)
TL;DR: In this paper, the two- and four-particle cumulants $v_2\{2\}$ and $v_2\{4\}$, and the dependence of $v_2\{2\}$ on multiplicity, were obtained for charged hadrons from U + U collisions at $\sqrt{s_{NN}} = 193$ GeV and Au + Au collisions at $\sqrt{s_{NN}} = 200$ GeV.
Abstract: Collisions between prolate uranium nuclei are used to study how particle production and azimuthal anisotropies depend on initial geometry in heavy-ion collisions. We report the two- and four-particle cumulants, $v_2\{2\}$ and $v_2\{4\}$, for charged hadrons from U + U collisions at $\sqrt{s_{NN}} = 193$ GeV and Au + Au collisions at $\sqrt{s_{NN}} = 200$ GeV. Nearly fully overlapping collisions are selected based on the energy deposited by spectators in zero degree calorimeters (ZDCs). Within this sample, the observed dependence of $v_2\{2\}$ on multiplicity demonstrates that ZDC information combined with multiplicity can preferentially select different overlap configurations in U + U collisions. We also show that $v_2$ vs. multiplicity can be better described by models, such as gluon saturation or quark participant models, that eliminate the dependence of the multiplicity on the number of binary nucleon-nucleon collisions.
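As an illustration of how such cumulants are typically built, the sketch below estimates $v_2\{2\}$ from per-event azimuthal angles with the standard Q-vector (direct cumulant) formula; it uses unit weights and no pseudorapidity gap, and is not the analysis code of the measurement.

```python
import numpy as np

def v2_two_particle(phis_per_event, n=2):
    """Estimate v_n{2} from a list of per-event azimuthal-angle arrays
    using the Q-vector form of the two-particle cumulant
    (unit weights, no eta gap, no non-flow suppression)."""
    num, den = 0.0, 0.0
    for phi in phis_per_event:
        m = len(phi)
        if m < 2:
            continue
        q_n = np.sum(np.exp(1j * n * np.asarray(phi)))
        two = (np.abs(q_n) ** 2 - m) / (m * (m - 1))  # <2> for this event
        w = m * (m - 1)                               # multiplicity weight
        num += w * two
        den += w
    return np.sqrt(num / den)  # v_n{2} = sqrt(<<2>>)
```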

Journal ArticleDOI
TL;DR: A nonperturbative approach to the thermal production of dileptons and photons at temperatures near the critical temperature in QCD is presented; the strong suppression of photons in the semi-QGP tends to weight the elliptical flow of photons to that generated in the hadronic phase.
Abstract: We consider a nonperturbative approach to the thermal production of dileptons and photons at temperatures near the critical temperature in QCD. The suppression of colored excitations at low temperature is modeled by including a small value of the Polyakov loop, in a "semi"-quark-gluon plasma (QGP). Comparing the semi-QGP to the perturbative QGP, we find a mild enhancement of thermal dileptons. In contrast, to leading logarithmic order in weak coupling there are far fewer hard photons from the semi-QGP than the usual QGP. To illustrate the possible effects on photon and dilepton production in heavy-ion collisions, we integrate the rate with a simulation using ideal hydrodynamics. Dileptons uniformly exhibit a small flow, but the strong suppression of photons in the semi-QGP tends to weight the elliptical flow of photons to that generated in the hadronic phase.


Journal ArticleDOI
TL;DR: It is shown how most common autoencoders are naturally associated with an energy function, independent of the training procedure, and that the energy landscape can be inferred analytically by integrating the reconstruction function of the autoencoder.
Abstract: Autoencoders are popular feature learning models that are conceptually simple, easy to train and allow for efficient inference. Recent work has shown how certain autoencoders can be associated with an energy landscape, akin to negative log-probability in a probabilistic model, which measures how well the autoencoder can represent regions in the input space. The energy landscape has been commonly inferred heuristically, by using a training criterion that relates the autoencoder to a probabilistic model such as a Restricted Boltzmann Machine (RBM). In this paper we show how most common autoencoders are naturally associated with an energy function, independent of the training procedure, and that the energy landscape can be inferred analytically by integrating the reconstruction function of the autoencoder. For autoencoders with sigmoid hidden units, the energy function is identical to the free energy of an RBM, which helps shed light onto the relationship between these two types of model. We also show that the autoencoder energy function allows us to explain common regularization procedures, such as contractive training, from the perspective of dynamical systems. As a practical application of the energy function, a generative classifier based on class-specific autoencoders is presented.
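A minimal sketch of this construction for the sigmoid case: for a tied-weight autoencoder with reconstruction r(x) = W^T sigmoid(Wx + b_h) + b_r, integrating the vector field r(x) - x gives an energy whose gradient recovers the reconstruction error and which matches an RBM free energy up to an additive constant. The function below assumes tied weights and drops the integration constant.

```python
import numpy as np

def autoencoder_energy(x, W, b_h, b_r):
    """Energy of a tied-weight autoencoder with sigmoid hidden units,
    obtained by integrating its reconstruction vector field r(x) - x
    (additive integration constant dropped).
    Gradient check: dE/dx = W.T @ sigmoid(W @ x + b_h) + b_r - x = r(x) - x."""
    pre = W @ x + b_h                  # hidden pre-activations
    softplus = np.logaddexp(0.0, pre)  # log(1 + exp(pre)), numerically stable
    return softplus.sum() + b_r @ x - 0.5 * x @ x
```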

Journal ArticleDOI
TL;DR: It is shown in ferrets that at eye opening, the cortical response to visual stimulation exhibits several immaturities, including a high density of active neurons that display prominent wave-like activity, a high degree of variability and strong noise correlations.
Abstract: Rapid developmental changes in the response properties of neurons in visual cortex enhance motion discriminability following eye opening. Here the authors show that increases in direction selectivity are accompanied by reductions in the density of active neurons and variability in their responses and levels of noise correlation, changes that depend on the nature of visual experience.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated quarkonium mass spectra in external constant magnetic fields by using QCD sum rules and found that the dominant origin of mass shifts comes from a mixing between ηc and J/ψ with a longitudinal spin polarization, accompanied by other subdominant effects such as mixing with higher excited states and continua.
Abstract: We investigate quarkonium mass spectra in external constant magnetic fields by using QCD sum rules. We first discuss a general framework of QCD sum rules necessary for properly extracting meson spectra from current correlators computed in the presence of strong magnetic fields, that is, a consistent treatment of mixing effects caused in the mesonic degrees of freedom. We then implement operator product expansions for pseudoscalar and vector heavy-quark current correlators by taking into account external constant magnetic fields as operators and obtain mass shifts of the lowest-lying bound states ηc and J/ψ in the static limit with their vanishing spatial momenta. Comparing results from QCD sum rules with those from hadronic effective theories, we find that the dominant origin of mass shifts comes from a mixing between ηc and J/ψ with a longitudinal spin polarization, accompanied by other subdominant effects such as mixing with higher excited states and continua.

Journal ArticleDOI
TL;DR: In this paper, the effects of hadronic rescattering on hadron distributions in high-energy nuclear collisions were investigated using an integrated dynamical approach based on a hybrid model combining (3+1)-dimensional ideal hydrodynamics for the quark gluon plasma (QGP), and a transport model for the hadron resonance gas.
Abstract: We study the effects of hadronic rescattering on hadron distributions in high-energy nuclear collisions by using an integrated dynamical approach. This approach is based on a hybrid model combining (3+1)-dimensional ideal hydrodynamics for the quark gluon plasma (QGP), and a transport model for the hadron resonance gas. Since the hadron distributions are the result of the entire expansion history of the system, understanding the QGP properties requires investigating how rescattering during the hadronic stage affects the final distributions of hadrons. We include multistrange hadrons in our study, and quantify the effects of hadronic rescattering on their mean transverse momenta and elliptic flow. We find that multistrange hadrons scatter less during the hadronic stage than non-strange particles, and thus their distributions reflect the properties of the system in an earlier stage than the distributions of non-strange particles.

Journal ArticleDOI
TL;DR: In this article, a relativistic transport approach incorporating both hadronic and partonic phases, the parton-hadron-string dynamics (PHSD), was proposed to investigate the photon spectra and flow in heavy-ion collisions at CERN Super Proton Synchrotron, BNL Relativistic Heavy Ion Collider, and CERN Large Hadron Collider energies.
Abstract: The direct photon spectra and flow ($v_2$, $v_3$) in heavy-ion collisions at CERN Super Proton Synchrotron, BNL Relativistic Heavy Ion Collider, and CERN Large Hadron Collider energies are investigated within a relativistic transport approach incorporating both hadronic and partonic phases, the parton-hadron-string dynamics (PHSD). In the present work, four extensions are introduced compared to our previous calculations: (i) going beyond the soft-photon approximation (SPA) in the calculation of the bremsstrahlung processes meson + meson → meson + meson + γ, (ii) quantifying the suppression owing to the Landau-Pomeranchuk-Migdal (LPM) coherence effect, (iii) adding the additional channels V + N → N + γ and Δ → N + γ, and (iv) providing PHSD calculations for Pb + Pb collisions at $\sqrt{s_{NN}} = 2.76$ TeV. The first issue extends the applicability of the bremsstrahlung calculations to higher photon energies to understand the relevant sources in the region $p_T = 0.5$-$1.5$ GeV, while the LPM correction turns out to be important for $p_T < 0.4$ GeV in the partonic phase. The results suggest that a large elliptic flow $v_2$ of the direct photons signals a significant contribution of photons produced in interactions of secondary mesons and baryons in the late (hadronic) stage of the heavy-ion collision. To further differentiate the origin of the direct photon azimuthal asymmetry (late hadron interactions vs. electromagnetic fields in the initial stage), we provide predictions for the photon spectra, elliptic flow, and triangular flow $v_3(p_T)$ of direct photons at different centralities to be tested by the experimental measurements at the LHC energies. Additionally, we illustrate the magnitude of the photon production in the partonic and hadronic phases as functions of time and local energy density. Finally, the "cocktail" method for an estimation of the background photon elliptic flow, which is widely used in the experimental works, is supported by the calculations within the PHSD transport approach.

Journal ArticleDOI
TL;DR: In this article, Monte Carlo results in lattice QCD for the pressure and energy density at small temperatures $T < 155$ MeV and zero baryonic chemical potential are analyzed within the hadron resonance gas model.
Abstract: The Monte Carlo results in lattice QCD for the pressure and energy density at small temperatures $T < 155$ MeV and zero baryonic chemical potential are analyzed within the hadron resonance gas model. Two extensions of the ideal hadron resonance gas are considered: the excluded volume model, which describes a repulsion of hadrons at short distances, and the Hagedorn model with an exponential mass spectrum. Considering both of these models we do not find conclusive evidence in favor of either of them. The controversial results appear because of rather different sensitivities of the pressure and energy density to both excluded volume and Hagedorn mass spectrum effects. On the other hand, we have found clear evidence for a simultaneous presence of both of them. They lead to rather essential contributions: suppression effects for thermodynamical functions of the hadron resonance gas due to the excluded volume effects and enhancement due to the Hagedorn mass spectrum.
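For orientation, the two extensions can be written schematically as an exponential Hagedorn mass spectrum and an excluded-volume (EV) shift of the chemical potential; the notation below is generic and is not the specific parametrization fitted in this paper:

$$ \rho(m) \propto m^{-a}\, e^{\,m/T_H}, \qquad p_{\rm ev}(T,\mu) = p_{\rm id}\!\left(T,\ \mu - v\, p_{\rm ev}(T,\mu)\right), $$

where $T_H$ is the Hagedorn temperature and $v$ the excluded volume per hadron; the first term enhances and the second suppresses the thermodynamic functions relative to the ideal hadron resonance gas, which is the competition discussed above.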

Journal ArticleDOI
TL;DR: The van der Waals (VDW) equation of state is a simple and popular model to describe the pressure function in equilibrium systems of particles with both repulsive and attractive interactions.
Abstract: The van der Waals (VDW) equation of state is a simple and popular model to describe the pressure function in equilibrium systems of particles with both repulsive and attractive interactions. This equation predicts the existence of a first-order liquid-gas phase transition and contains a critical point. Two steps to extend the VDW equation and make it appropriate for new physical applications are carried out in this paper: (i) the grand canonical ensemble formulation and (ii) the inclusion of the quantum statistics. The VDW equation with Fermi statistics is then applied to a description of the system of interacting nucleons. The VDW parameters $a$ and $b$ are fixed to reproduce the properties of nuclear matter at saturation density $n_0 = 0.16\ \mathrm{fm}^{-3}$ and zero temperature. The model predicts a location of the critical point for the symmetric nuclear matter at temperature $T_c \cong 19.7$ MeV and nucleon number density $n_c \cong 0.07\ \mathrm{fm}^{-3}$.
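For reference, in the canonical (Boltzmann) limit the model reduces to the textbook van der Waals pressure, whose critical point fixes the relation between the parameters $a$ and $b$:

$$ p(T,n) = \frac{nT}{1 - bn} - a\,n^{2}, \qquad T_c = \frac{8a}{27b}, \quad n_c = \frac{1}{3b}, \quad p_c = \frac{a}{27b^{2}}. $$

The quantum-statistical, grand canonical formulation developed in the paper modifies these classical relations.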

Journal ArticleDOI
TL;DR: This work is an important step in bringing biologically optimized treatment planning for proton therapy closer to the clinical practice as it will allow researchers to refine and compare pre-defined as well as user-defined models.
Abstract: The aim of this work is to extend a widely used proton Monte Carlo tool, TOPAS, towards the modeling of relative biological effectiveness (RBE) distributions in experimental arrangements as well as patients. TOPAS provides a software core which users configure by writing parameter files to, for instance, define application-specific geometries and scoring conditions. Expert users may further extend TOPAS scoring capabilities by plugging in their own additional C++ code. This structure was utilized for the implementation of eight biophysical models suited to calculate proton RBE. As far as physics parameters are concerned, four of these models are based on the proton linear energy transfer, while the others are based on DNA double strand break induction and the frequency-mean specific energy, lineal energy, or delta electron generated track structure. The biological input parameters for all models are typically inferred from fits of the models to radiobiological experiments. The model structures have been implemented in a coherent way within the TOPAS architecture. Their performance was validated against measured experimental data on proton RBE in a spread-out Bragg peak using V79 Chinese Hamster cells. This work is an important step in bringing biologically optimized treatment planning for proton therapy closer to the clinical practice as it will allow researchers to refine and compare pre-defined as well as user-defined models.
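As an indication of the kind of biophysical model that can be plugged in, the sketch below evaluates a generic LET-based RBE within the linear-quadratic framework. The linear dependence of RBE_max on dose-averaged LET and all numerical coefficients are placeholders, not the parameters of any of the eight models implemented in the paper.

```python
import numpy as np

def rbe_let_model(dose_gy, let_d, alpha_x=0.1, beta_x=0.05,
                  p0=1.0, p1=0.04, rbe_min=1.0):
    """Sketch of a generic LET-based RBE model in the linear-quadratic
    framework. RBE_max is taken linear in the dose-averaged LET; p0, p1,
    alpha_x, beta_x and rbe_min are illustrative placeholder values."""
    rbe_max = p0 + p1 * let_d               # assumed linear LET dependence
    alpha_p = rbe_max * alpha_x             # proton alpha
    beta_p = (rbe_min ** 2) * beta_x        # proton beta
    # Photon dose giving the same survival as the proton dose per fraction:
    d_x = (-alpha_x + np.sqrt(alpha_x ** 2
           + 4.0 * beta_x * (alpha_p * dose_gy + beta_p * dose_gy ** 2))) / (2.0 * beta_x)
    return d_x / dose_gy                    # RBE = D_photon / D_proton
```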

Journal ArticleDOI
TL;DR: In this article, the correlation between a pair of Λ hyperons emitted from a heavy-ion collision is shown to be sensitive to the ΛΛ interaction.
Abstract: The correlation between a pair of Λ hyperons emitted from a heavy-ion collision is shown to be sensitive to the ΛΛ interaction. The result bears on the existence of the $H$ dibaryon. A competing analysis by the STAR Collaboration appears in PRL 114, 022301 (2015).

Journal ArticleDOI
TL;DR: In this article, the effect of the successive regeneration and decay of resonances after the chemical freeze-out is considered, which leads to a randomization of the isospin of the nucleons and thus to additional fluctuations in the net proton number.
Abstract: We investigate net proton fluctuations as important observables measured in heavy-ion collisions within the hadron resonance gas (HRG) model. Special emphasis is given to effects which are a priori not inherent in a thermally and chemically equilibrated HRG approach. In particular, we point out the importance of taking into account the successive regeneration and decay of resonances after the chemical freeze-out, which lead to a randomization of the isospin of nucleons and thus to additional fluctuations in the net proton number. We find good agreement between our model results and the recent STAR measurements of the higher-order moments of the net proton distribution.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate compact objects formed by dark matter admixed with ordinary matter made of neutron-star matter and white-dwarf material, and obtain dark compact planets with Jupiter-like masses and radii of few hundred Km.
Abstract: We investigate compact objects formed by dark matter admixed with ordinary matter made of neutron-star matter and white-dwarf material. We consider non-self annihilating dark matter with an equation of state given by an interacting Fermi gas. We find new stable solutions, dark compact planets, with Earth-like masses and radii from a few Km to few hundred Km for weakly interacting dark matter which are stabilized by the mutual presence of dark matter and compact star matter. For the strongly interacting dark matter case, we obtain dark compact planets with Jupiter-like masses and radii of few hundred Km. These objects could be detected by observing exoplanets with unusually small radii. Moreover, we find that the recently observed $2\text{ }\text{ }{\mathrm{M}}_{\ensuremath{\bigodot}}$ pulsars set limits on the amount of dark matter inside neutron stars which is, at most, $1{0}^{\ensuremath{-}6}\text{ }\text{ }{\mathrm{M}}_{\ensuremath{\bigodot}}$.
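A common way to model such admixed objects, consistent with the description here, is the two-fluid Tolman-Oppenheimer-Volkoff system in which dark and ordinary matter couple only through gravity; written in units $G = c = 1$ (our notation, not a quotation from the paper):

$$ \frac{dp_i}{dr} = -\,\frac{\left(\varepsilon_i + p_i\right)\left[m(r) + 4\pi r^{3} p(r)\right]}{r\left[r - 2m(r)\right]}, \qquad \frac{dm_i}{dr} = 4\pi r^{2}\,\varepsilon_i, $$

with $i$ labelling the two fluids, $p = p_1 + p_2$ and $m = m_1 + m_2$.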

Posted Content
TL;DR: In this paper, a modified, self-dual Schwarzschild-like metric was proposed to reproduce desirable aspects of a variety of disparate models in the sub-Planckian limit, while remaining Schwarzschild in the large mass limit.
Abstract: The Black Hole Uncertainty Principle correspondence suggests that there could exist black holes with mass beneath the Planck scale but radius of order the Compton scale rather than Schwarzschild scale. We present a modified, self-dual Schwarzschild-like metric that reproduces desirable aspects of a variety of disparate models in the sub-Planckian limit, while remaining Schwarzschild in the large mass limit. The self-dual nature of this solution under $M \leftrightarrow M^{-1}$ naturally implies a Generalized Uncertainty Principle with the linear form $\Delta x \sim \frac{1}{\Delta p} + \Delta p$. We also demonstrate a natural dimensional reduction feature, in that the gravitational radius and thermodynamics of sub-Planckian objects resemble that of $(1+1)$-D gravity. The temperature of sub-Planckian black holes scales as $M$ rather than $M^{-1}$ but the evaporation of those smaller than $10^{-36}$ g is suppressed by the cosmic background radiation. This suggests that relics of this mass could provide the dark matter.

Journal ArticleDOI
TL;DR: In this paper, a coarse-grained time evolution from the Ultrarelativistic Quantum Molecular Dynamics (UrQMD) model is used to calculate thermal dilepton-emission rates by the application of in-medium spectral functions from equilibrium quantum field theoretical calculations.
Abstract: Dilepton invariant-mass spectra for heavy-ion collisions at GSI Schwerionensynchrotron (SIS 18) and LBNL Bevalac energies are calculated using a coarse-grained time evolution from the Ultrarelativistic Quantum Molecular Dynamics (UrQMD) model. The coarse graining of the microscopic UrQMD simulations makes it possible to calculate thermal dilepton-emission rates by the application of in-medium spectral functions from equilibrium quantum-field theoretical calculations. The results show that extremely high baryon chemical potentials dominate the evolution of the created hot and dense fireball. Consequently, a significant modification of the $\rho$ spectral shape becomes visible in the dilepton invariant-mass spectrum, resulting in an enhancement in the low-mass region $M_{ee} = 200$ to 600 MeV/$c^2$. This enhancement, mainly caused by baryonic effects on the $\rho$ spectral shape, can fully describe the experimentally observed excess above the hadronic cocktail contributions in Ar + KCl ($E_{\mathrm{lab}} = 1.76A$ GeV) reactions, as measured by the HADES Collaboration, and also gives a good explanation of the older DLS Ca + Ca ($E_{\mathrm{lab}} = 1.04A$ GeV) data. For the larger Au + Au ($E_{\mathrm{lab}} = 1.23A$ GeV) system, we predict an even stronger excess from our calculations. A systematic comparison of the results for different system sizes from C + C to Au + Au shows that the thermal dilepton yield increases more strongly ($\propto A^{4/3}$) than the hadronic background contributions, which scale with $A$, owing to its sensitivity on the time evolution of the reaction. We stress that the findings of the present work are consistent with our previous coarse-graining results for dilepton production at the top energy available at the CERN Super Proton Synchrotron (SPS). We argue that it is possible to describe the dilepton results from SIS 18 up to SPS energies by considering the modifications of the $\rho$ spectral function inside a hot and dense medium within the same model.
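Schematically, the thermal rate evaluated on the coarse-grained cells has the standard form in which the in-medium electromagnetic spectral function enters weighted by a thermal Bose factor (numerical prefactors omitted here):

$$ \frac{dN_{ll}}{d^{4}x\, d^{4}q} \;\propto\; \frac{f_B(q\cdot u;\,T)}{M^{2}}\; \mathrm{Im}\,\Pi_{\mathrm{EM}}\!\left(M,|\vec q\,|;\,T,\mu_B\right), $$

so that the baryon-density driven broadening of the $\rho$ contribution to $\mathrm{Im}\,\Pi_{\mathrm{EM}}$ translates directly into the low-mass excess discussed above.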

Journal ArticleDOI
TL;DR: In this article, the authors investigated the main features of the production of hyper-fragments in relativistic heavy-ion collisions and demonstrated that the origin of hypernuclei of various masses can be explained by typical baryon interactions.

Journal ArticleDOI
TL;DR: It is demonstrated that key observations on spontaneous brain activity and the variability of neural responses can be accounted for by a simple deterministic recurrent neural network which learns a predictive model of its sensory environment via a combination of generic neural plasticity mechanisms.
Abstract: Even in the absence of sensory stimulation the brain is spontaneously active. This background “noise” seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network’s spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network’s behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. We conclude that key observations on spontaneous brain activity and the variability of neural responses can be accounted for by a simple deterministic recurrent neural network which learns a predictive model of its sensory environment via a combination of generic neural plasticity mechanisms.
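To make the combination of plasticity mechanisms concrete, here is a minimal sketch of one update step for a SORN-like network with binary units; the learning rates, target rate, and exact update conventions are assumptions for illustration and may differ from the network used in the paper.

```python
import numpy as np

def sorn_plasticity_step(x_prev, x_now, w_ee, thresholds,
                         eta_stdp=0.004, eta_ip=0.01, target_rate=0.1):
    """One illustrative plasticity update for a SORN-like network.
    x_prev, x_now: binary excitatory activity vectors at t-1 and t.
    w_ee: excitatory-to-excitatory weights (rows = postsynaptic units).
    thresholds: excitatory firing thresholds."""
    # STDP: strengthen j->i when j fired before i, weaken the reverse order.
    w_ee += eta_stdp * (np.outer(x_now, x_prev) - np.outer(x_prev, x_now))
    w_ee = np.clip(w_ee, 0.0, None)
    # Synaptic normalization: keep each unit's summed incoming weight at 1.
    w_ee /= w_ee.sum(axis=1, keepdims=True) + 1e-12
    # Intrinsic (homeostatic) plasticity: drift thresholds toward a target rate.
    thresholds += eta_ip * (x_now - target_rate)
    return w_ee, thresholds
```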