Author

Thomas Lippert

Bio: Thomas Lippert is an academic researcher at Forschungszentrum Jülich. He has contributed to research on topics including quantum chromodynamics and lattice QCD, has an h-index of 40, and has co-authored 174 publications receiving 9,463 citations. Previous affiliations of Thomas Lippert include Goethe University Frankfurt and the University of Wuppertal.


Papers
Journal Article
M. Aguilar, D. Aisa, Behcet Alpat, A. Alvino, +291 more (33 institutions)
TL;DR: In this paper, a precise measurement of the proton flux in primary cosmic rays with rigidity (momentum/charge) from 1 GV to 1.8 TV is presented, based on 300 million events.
Abstract: A precise measurement of the proton flux in primary cosmic rays with rigidity (momentum/charge) from 1 GV to 1.8 TV is presented based on 300 million events. Knowledge of the rigidity dependence of the proton flux is important in understanding the origin, acceleration, and propagation of cosmic rays. We present the detailed variation with rigidity of the flux spectral index for the first time. The spectral index progressively hardens at high rigidities.
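
For context, the flux spectral index γ is the local logarithmic slope of the flux, Φ ∝ R^γ, so "hardening" means γ grows less negative with rising rigidity. A minimal sketch of how a local index can be estimated between two rigidity points; the flux values below are invented for illustration:

    import math

    def local_spectral_index(r1, flux1, r2, flux2):
        # Finite-difference slope of log(flux) vs log(rigidity): flux ~ rigidity**gamma.
        return (math.log(flux2) - math.log(flux1)) / (math.log(r2) - math.log(r1))

    # Invented flux values in arbitrary units, only to exercise the function:
    print(local_spectral_index(10.0, 1.0e2, 100.0, 2.0e-1))  # ≈ -2.7, a typical soft spectrum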

783 citations

Journal Article
01 Feb 2011
TL;DR: The work of the community to prepare for the challenges of exascale computing is described, ultimately combining their efforts in a coordinated International Exascale Software Project.
Abstract: Over the last 20 years, the open-source community has provided more and more software on which the world’s high-performance computing systems depend for performance and productivity. The community has invested millions of dollars and years of effort to build key components. However, although the investments in these separate software elements have been tremendously valuable, a great deal of productivity has also been lost because of the lack of planning, coordination, and key integration of technologies necessary to make them work together smoothly and efficiently, both within individual petascale systems and between different systems. It seems clear that this completely uncoordinated development model will not provide the software needed to support the unprecedented parallelism required for peta-/exascale computation on millions of cores, or the flexibility required to exploit new hardware models and features, such as transactional memory, speculative execution, and graphics processing units. This report describes the work of the community to prepare for the challenges of exascale computing, ultimately combining their efforts in a coordinated International Exascale Software Project.

736 citations

Journal Article
21 Jun 2013 · Science
TL;DR: BigBrain is a free, publicly available tool that provides considerable neuroanatomical insight into the human brain, thereby allowing the extraction of microscopic data for modeling and simulation, and enables testing of hypotheses on optimal path lengths between interconnected cortical regions or on spatial organization of genetic patterning.
Abstract: Reference brains are indispensable tools in human brain mapping, enabling integration of multimodal data into an anatomically realistic standard space. Available reference brains, however, are restricted to the macroscopic scale and do not provide information on the functionally important microscopic dimension. We created an ultrahigh-resolution three-dimensional (3D) model of a human brain at nearly cellular resolution of 20 micrometers, based on the reconstruction of 7404 histological sections. “BigBrain” is a free, publicly available tool that provides considerable neuroanatomical insight into the human brain, thereby allowing the extraction of microscopic data for modeling and simulation. BigBrain enables testing of hypotheses on optimal path lengths between interconnected cortical regions or on spatial organization of genetic patterning, redefining the traditional neuroanatomy maps such as those of Brodmann and von Economo.
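
As a quick plausibility check on the figures quoted above, a sketch in Python; the ~1.2-litre whole-brain volume is an assumed round number, not taken from the abstract:

    n_sections = 7404
    thickness_m = 20e-6                      # 20-micrometer section thickness
    print(f"stack height ≈ {n_sections * thickness_m * 100:.1f} cm")        # ≈ 14.8 cm

    brain_volume_m3 = 1.2e-3                 # assumed ~1.2 L human brain volume
    voxel_m3 = (20e-6) ** 3                  # 20-micrometer isotropic voxel
    print(f"≈ {brain_volume_m3 / voxel_m3:.1e} voxels at full resolution")  # ≈ 1.5e11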

679 citations

Journal Article
21 Nov 2008 · Science
TL;DR: This work presents a full ab initio calculation of the masses of protons, neutrons, and other light hadrons, using lattice quantum chromodynamics, and represents a quantitative confirmation of this aspect of the Standard Model with fully controlled uncertainties.
Abstract: More than 99% of the mass of the visible universe is made up of protons and neutrons. Both particles are much heavier than their quark and gluon constituents, and the Standard Model of particle physics should explain this difference. We present a full ab initio calculation of the masses of protons, neutrons, and other light hadrons, using lattice quantum chromodynamics. Pion masses down to 190 mega-electron volts are used to extrapolate to the physical point, with lattice sizes of approximately four times the inverse pion mass. Three lattice spacings are used for a continuum extrapolation. Our results completely agree with experimental observations and represent a quantitative confirmation of this aspect of the Standard Model with fully controlled uncertainties.
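
The condition "lattice sizes of approximately four times the inverse pion mass" fixes the physical box size; a one-line check, converting with the standard value ħc ≈ 197.3 MeV·fm:

    HBARC_MEV_FM = 197.327            # hbar*c in MeV*fm
    m_pi_mev = 190.0                  # lightest simulated pion mass, from the abstract
    box_fm = 4.0 * HBARC_MEV_FM / m_pi_mev
    print(f"L ≈ {box_fm:.1f} fm")     # ≈ 4.2 fm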

586 citations

Journal Article
L. Accardo, M. Aguilar, D. Aisa, +308 more (28 institutions)
TL;DR: The new results show, for the first time, that above ∼200 GeV the positron fraction no longer exhibits an increase with energy.
Abstract: A precision measurement by AMS of the positron fraction in primary cosmic rays in the energy range from 0.5 to 500 GeV based on 10.9 million positron and electron events is presented. This measurement extends the energy range of our previous observation and increases its precision. The new results show, for the first time, that above ∼200 GeV the positron fraction no longer exhibits an increase with energy.
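
For reference, the positron fraction reported here is, in the usual definition, the ratio of the positron flux to the combined electron-plus-positron flux at a given energy E:

    positron fraction(E) = Φe+(E) / ( Φe+(E) + Φe−(E) )

A flattening above ∼200 GeV therefore means the positron flux stops rising faster than the electron flux.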

513 citations


Cited by
28 Jul 2005
TL;DR: PfEMP1 interacts with one or more receptors on infected erythrocytes, dendritic cells, and the placenta, and plays a key role in adhesion and immune evasion.
Abstract: Antigenic variation allows many pathogenic microorganisms to evade the host immune response. Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP1), expressed on the surface of infected erythrocytes, interacts with one or more receptors on infected erythrocytes, endothelial cells, dendritic cells, and the placenta, and plays a key role in adhesion and immune evasion. Each haploid genome encodes roughly 60 members of the var gene family, and switching transcription among different var gene variants provides the molecular basis for antigenic variation.

18,940 citations

Journal Article
Peter A. R. Ade, Nabila Aghanim, Monique Arnaud, M. Ashdown, +334 more (82 institutions)
TL;DR: In this article, the authors present a cosmological analysis based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation.
Abstract: This paper presents cosmological results based on full-mission Planck observations of temperature and polarization anisotropies of the cosmic microwave background (CMB) radiation. Our results are in very good agreement with the 2013 analysis of the Planck nominal-mission temperature data, but with increased precision. The temperature and polarization power spectra are consistent with the standard spatially-flat 6-parameter ΛCDM cosmology with a power-law spectrum of adiabatic scalar perturbations (denoted “base ΛCDM” in this paper). From the Planck temperature data combined with Planck lensing, for this cosmology we find a Hubble constant, H0 = (67.8 ± 0.9) km s⁻¹ Mpc⁻¹, a matter density parameter Ωm = 0.308 ± 0.012, and a tilted scalar spectral index with ns = 0.968 ± 0.006, consistent with the 2013 analysis. Note that in this abstract we quote 68% confidence limits on measured parameters and 95% upper limits on other parameters. We present the first results of polarization measurements with the Low Frequency Instrument at large angular scales. Combined with the Planck temperature and lensing data, these measurements give a reionization optical depth of τ = 0.066 ± 0.016, corresponding to a reionization redshift of . These results are consistent with those from WMAP polarization measurements cleaned for dust emission using 353-GHz polarization maps from the High Frequency Instrument. We find no evidence for any departure from base ΛCDM in the neutrino sector of the theory; for example, combining Planck observations with other astrophysical data we find Neff = 3.15 ± 0.23 for the effective number of relativistic degrees of freedom, consistent with the value Neff = 3.046 of the Standard Model of particle physics. The sum of neutrino masses is constrained to ∑mν < 0.23 eV. The spatial curvature of our Universe is found to be very close to zero, with |ΩK| < 0.005. Adding a tensor component as a single-parameter extension to base ΛCDM we find an upper limit on the tensor-to-scalar ratio of r_0.002 < 0.11, consistent with the Planck 2013 results and consistent with the B-mode polarization constraints from a joint analysis of BICEP2, Keck Array, and Planck (BKP) data. Adding the BKP B-mode data to our analysis leads to a tighter constraint of r_0.002 < 0.09 and disfavours inflationary models with a V(φ) ∝ φ² potential. The addition of Planck polarization data leads to strong constraints on deviations from a purely adiabatic spectrum of fluctuations. We find no evidence for any contribution from isocurvature perturbations or from cosmic defects. Combining Planck data with other astrophysical data, including Type Ia supernovae, the equation of state of dark energy is constrained to w = −1.006 ± 0.045, consistent with the expected value for a cosmological constant. The standard big bang nucleosynthesis predictions for the helium and deuterium abundances for the best-fit Planck base ΛCDM cosmology are in excellent agreement with observations. We also present constraints on annihilating dark matter and on possible deviations from the standard recombination history. In neither case do we find evidence for new physics. The Planck results for base ΛCDM are in good agreement with baryon acoustic oscillation data and with the JLA sample of Type Ia supernovae. However, as in the 2013 analysis, the amplitude of the fluctuation spectrum is found to be higher than inferred from some analyses of rich cluster counts and weak gravitational lensing.
We show that these tensions cannot easily be resolved with simple modifications of the base ΛCDM cosmology. Apart from these tensions, the base ΛCDM cosmology provides an excellent description of the Planck CMB observations and many other astrophysical data sets.
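
As a quick sanity check on the quoted Hubble constant, a sketch converting H0 to the Hubble time 1/H0; the kilometers-per-megaparsec constant is a standard value, not from the abstract:

    KM_PER_MPC = 3.0857e19                  # kilometers in one megaparsec
    H0_per_s = 67.8 / KM_PER_MPC            # quoted H0 = 67.8 km/s/Mpc, in s^-1
    t_hubble_s = 1.0 / H0_per_s
    print(f"1/H0 ≈ {t_hubble_s:.2e} s ≈ {t_hubble_s / 3.156e7 / 1e9:.1f} Gyr")  # ≈ 14.4 Gyr

This sets the characteristic expansion timescale, of the same order as (though not identical to) the ΛCDM age of the Universe.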

10,728 citations

01 Dec 1982
TL;DR: In this article, it was shown that any black hole will create and emit particles such as neutrinos or photons at just the rate that one would expect if the black hole was a body with a temperature of (κ/2π)(ħ/2k) ≈ 10⁻⁶ (M⊙/M) K, where κ is the surface gravity of the body.
Abstract: QUANTUM gravitational effects are usually ignored in calculations of the formation and evolution of black holes. The justification for this is that the radius of curvature of space-time outside the event horizon is very large compared to the Planck length (Gℏ/c³)^(1/2) ≈ 10⁻³³ cm, the length scale on which quantum fluctuations of the metric are expected to be of order unity. This means that the energy density of particles created by the gravitational field is small compared to the space-time curvature. Even though quantum effects may be small locally, they may still, however, add up to produce a significant effect over the lifetime of the Universe ≈ 10¹⁷ s, which is very long compared to the Planck time ≈ 10⁻⁴³ s. The purpose of this letter is to show that this indeed may be the case: it seems that any black hole will create and emit particles such as neutrinos or photons at just the rate that one would expect if the black hole was a body with a temperature of (κ/2π)(ħ/2k) ≈ 10⁻⁶ (M⊙/M) K, where κ is the surface gravity of the black hole [1]. As a black hole emits this thermal radiation one would expect it to lose mass. This in turn would increase the surface gravity and so increase the rate of emission. The black hole would therefore have a finite life of the order of 10⁷¹ (M⊙/M)⁻³ s. For a black hole of solar mass this is much longer than the age of the Universe. There might, however, be much smaller black holes which were formed by fluctuations in the early Universe [2]. Any such black hole of mass less than 10¹⁵ g would have evaporated by now. Near the end of its life the rate of emission would be very high and about 10³⁰ erg would be released in the last 0.1 s. This is a fairly small explosion by astronomical standards, but it is equivalent to about 1 million 1-Mton hydrogen bombs. It is often said that nothing can escape from a black hole. But in 1974, Stephen Hawking realized that, owing to quantum effects, black holes should emit particles with a thermal distribution of energies, as if the black hole had a temperature inversely proportional to its mass. In addition to putting black-hole thermodynamics on a firmer footing, this discovery led Hawking to postulate 'black hole explosions', as primordial black holes end their lives in an accelerating release of energy.
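
A hedged numerical sketch using the now-standard form of the Hawking temperature, T = ħc³/(8πGMk_B), which carries the same 1/M scaling as the abstract's order-of-magnitude estimate:

    import math

    hbar, c, G, k_B = 1.0546e-34, 2.9979e8, 6.674e-11, 1.3807e-23  # SI values
    M_SUN = 1.989e30                                               # solar mass, kg

    def hawking_temperature(mass_kg):
        # T = hbar c^3 / (8 pi G M k_B); colder for heavier black holes.
        return hbar * c ** 3 / (8 * math.pi * G * mass_kg * k_B)

    print(f"T(M_sun) ≈ {hawking_temperature(M_SUN):.1e} K")        # ≈ 6.2e-8 K

The exact coefficient gives ≈ 6×10⁻⁸ K for a solar-mass black hole; the abstract's 10⁻⁶ (M⊙/M) K is the rougher original estimate.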

2,947 citations

Journal Article
Aviv Regev, Sarah A. Teichmann, Eric S. Lander, Ido Amit, Christophe Benoist, Ewan Birney, Bernd Bodenmiller, Peter J. Campbell, Piero Carninci, Menna R. Clatworthy, Hans Clevers, Bart Deplancke, Ian Dunham, James Eberwine, Roland Eils, Wolfgang Enard, Andrew Farmer, Lars Fugger, Berthold Göttgens, Nir Hacohen, Muzlifah Haniffa, Martin Hemberg, Seung K. Kim, Paul Klenerman, Arnold R. Kriegstein, Ed S. Lein, Sten Linnarsson, Emma Lundberg, Joakim Lundeberg, Partha P. Majumder, John C. Marioni, Miriam Merad, Musa M. Mhlanga, Martijn C. Nawijn, Mihai G. Netea, Garry P. Nolan, Dana Pe'er, Anthony Phillipakis, Chris P. Ponting, Stephen R. Quake, Wolf Reik, Orit Rozenblatt-Rosen, Joshua R. Sanes, Rahul Satija, Ton N. Schumacher, Alex K. Shalek, Ehud Shapiro, Padmanee Sharma, Jay W. Shin, Oliver Stegle, Michael R. Stratton, Michael J. T. Stubbington, Fabian J. Theis, Matthias Uhlen, Alexander van Oudenaarden, Allon Wagner, Fiona M. Watt, Jonathan S. Weissman, Barbara J. Wold, Ramnik J. Xavier, Nir Yosef, Human Cell Atlas Meeting Participants
05 Dec 2017 · eLife
TL;DR: An open comprehensive reference map of the molecular state of cells in healthy human tissues would propel the systematic study of physiological states, developmental trajectories, regulatory circuitry and interactions of cells, and also provide a framework for understanding cellular dysregulation in human disease.
Abstract: The recent advent of methods for high-throughput single-cell molecular profiling has catalyzed a growing sense in the scientific community that the time is ripe to complete the 150-year-old effort to identify all cell types in the human body. The Human Cell Atlas Project is an international collaborative effort that aims to define all human cell types in terms of distinctive molecular profiles (such as gene expression profiles) and to connect this information with classical cellular descriptions (such as location and morphology). An open comprehensive reference map of the molecular state of cells in healthy human tissues would propel the systematic study of physiological states, developmental trajectories, regulatory circuitry and interactions of cells, and also provide a framework for understanding cellular dysregulation in human disease. Here we describe the idea, its potential utility, early proofs-of-concept, and some design considerations for the Human Cell Atlas, including a commitment to open data, code, and community.

1,391 citations

Journal Article
J. Walkup
TL;DR: Development of this more comprehensive model of the behavior of light draws upon the use of tools traditionally available to the electrical engineer, such as linear system theory and the theory of stochastic processes.
Abstract: Course Description: This is an advanced course in which we explore the field of Statistical Optics. Topics covered include such subjects as the statistical properties of natural (thermal) and laser light, spatial and temporal coherence, effects of partial coherence on optical imaging instruments, effects on imaging due to randomly inhomogeneous media, and a statistical treatment of the detection of light. Development of this more comprehensive model of the behavior of light draws upon the use of tools traditionally available to the electrical engineer, such as linear system theory and the theory of stochastic processes.

1,364 citations