Journal ArticleDOI

The Atomic Nucleus

01 May 1957 - Nature (Nature Publishing Group) - Vol. 179, Iss. 4569, pp. 1040-1041
TL;DR: A review of the second edition of Elementary Nuclear Theory, a textbook on the theory of the atomic nucleus by Hans A. Bethe and Philip Morrison.
Abstract: Elementary Nuclear Theory By Prof. Hans A. Bethe and Prof. Philip Morrison. Second edition. Pp. xi + 274. (New York: John Wiley and Sons, Inc.; London: Chapman and Hall, Ltd., 1956.) 50s. net.

Citations
Journal ArticleDOI
TL;DR: The authors calculate the primordial light-element abundances in big-bang nucleosynthesis with a long-lived exotic particle X, paying particular attention to its hadronic decay modes; the JETSET 7.4 Monte Carlo event generator is used to compute the spectrum of hadrons produced by the decay of X, and uncertainties are estimated with a Monte Carlo simulation that includes the experimental errors of the cross sections and transferred energies.
Abstract: We study big-bang nucleosynthesis (BBN) with a long-lived exotic particle, called X. If the lifetime of X is longer than ~0.1 sec, its decay may cause non-thermal nuclear reactions during or after BBN, altering the predictions of the standard BBN scenario. We pay particular attention to its hadronic decay modes and calculate the primordial abundances of the light elements. Using the result, we derive constraints on the primordial abundance of X. Compared to previous studies, we have improved the following points in our analysis: the JETSET 7.4 Monte Carlo event generator is used to calculate the spectrum of hadrons produced by the decay of X; the evolution of the hadronic shower is studied taking account of the details of the energy-loss processes of the nuclei in the thermal bath; we have used the most recent observational constraints on the primordial abundances of the light elements; and, in order to estimate the uncertainties, we have performed a Monte Carlo simulation which includes the experimental errors of the cross sections and transferred energies. We will see that the non-thermal production of D, He3, He4 and Li6 provides stringent upper bounds on the primordial abundance of the late-decaying particle, in particular when the hadronic branching ratio of X is sizable. We apply our results to the gravitino problem and obtain an upper bound on the reheating temperature after inflation.
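
As an illustration of the Monte Carlo error propagation this abstract describes, the following minimal Python sketch draws each photodissociation cross section from a Gaussian with its quoted fractional error (the percentages tabulated in the excerpt below) and re-evaluates a toy observable per draw. The reaction subset, the toy response function, and all names are illustrative assumptions, not the authors' code.

    # Minimal sketch (not the authors' code) of Monte Carlo propagation of
    # cross-section uncertainties: each rate is scaled by a Gaussian factor
    # with its quoted fractional error, and a toy observable is re-evaluated.
    import random
    import statistics

    # Fractional 1-sigma errors for a few reactions, taken from the
    # excerpt tabulated below.
    ERRORS = {
        "g+D->n+p":   0.06,
        "g+T->n+D":   0.14,
        "g+He4->p+T": 0.04,
    }

    def toy_deuterium_yield(scale):
        # Purely illustrative response function: non-thermal D is taken to
        # grow with the g+He4->p+T rate and shrink with the g+D->n+p
        # destruction rate. The real analysis runs a full reaction network.
        return 2.5e-5 * scale["g+He4->p+T"] / scale["g+D->n+p"]

    samples = []
    for _ in range(10_000):
        scale = {k: random.gauss(1.0, err) for k, err in ERRORS.items()}
        samples.append(toy_deuterium_yield(scale))

    # The scatter of the samples is the propagated uncertainty.
    print(statistics.mean(samples), statistics.stdev(samples))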

840 citations


Additional excerpts

  • ...Reaction                Error   Reference
      γ + D → n + p            6%      [61]
      γ + T → n + D            14%     [62, 63]
      γ + T → p + n + n        7%      [63]
      γ + ³He → p + D          10%     [64]
      γ + ³He → p + p + n      15%     [64]
      γ + ⁴He → p + T          4%      [65]
      γ + ⁴He → n + ³He        5%      [66, 67]
      γ + ⁴He → p + n + D      14%     [65]
      γ + ⁶Li → anything       4%      [68]
      γ + ⁷Li → n + ⁶Li        4%      [69]
      γ + ⁷Li → anything       9%      [70]
      γ + ⁷Be → p + ⁶Li        (error and reference cut off in excerpt)
      γ + ⁷Be → anything...

Journal ArticleDOI
TL;DR: In this article, the authors revisited the upper limits on the abundance of unstable massive relic particles provided by the success of Big-Bang Nucleosynthesis calculations and derived analytic approximations that complement and check the full numerical calculations.
Abstract: We revisit the upper limits on the abundance of unstable massive relic particles provided by the success of Big-Bang Nucleosynthesis calculations. We use the cosmic microwave background data to constrain the baryon-to-photon ratio, and incorporate an extensively updated compilation of cross sections into a new calculation of the network of reactions induced by electromagnetic showers that create and destroy the light elements deuterium, ³He, ⁴He, ⁶Li and ⁷Li. We derive analytic approximations that complement and check the full numerical calculations. Considerations of the abundances of ⁴He and ⁶Li exclude exceptional regions of parameter space that would otherwise have been permitted by deuterium alone. We illustrate our results by applying them to massive gravitinos. If they weigh 100 GeV, their primordial abundance should have been below about 10⁻¹³ of the total entropy. This would imply an upper limit on the reheating temperature of a few times 10⁷ GeV, which could be a potential difficulty for some models of inflation. We discuss possible ways of evading this problem.

373 citations


Additional excerpts

  • ...and a power law in photon energy above threshold, E_γ − |Q|. We have found that expressions of this type provide a simple but accurate representation of the data.
      1. d(γ,n)p: E_γ,th = |Q| = 2.224573 MeV [74];
         σ(E_γ) = 18.75 mb × [ (√(|Q|(E_γ − |Q|)) / E_γ)³ + 0.007947 (√(|Q|(E_γ − |Q|)) / E_γ)² × (√|Q| − √0.037)² / (E_γ − (|Q| − 0.037)) ]
      2. t(γ,n)d: E_γ,th = |Q| = 6.257248 MeV [75, 76]; σ(E_γ) = 9.8 mb |Q...

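A numerical sketch of the d(γ,n)p fit quoted in the excerpt above. The formula is reconstructed from a garbled extraction, so the grouping of the second term is inferred and should be checked against the cited paper; the function name is illustrative.

    # Sketch of the d(gamma,n)p cross-section fit as reconstructed above;
    # the grouping of the correction term is inferred from a garbled
    # extraction and should be checked against the cited paper.
    import math

    Q = 2.224573  # deuteron binding energy |Q| in MeV (threshold energy)

    def sigma_d_gamma_n_p(E_gamma):
        """Fitted photodissociation cross section in mb; E_gamma in MeV."""
        if E_gamma <= Q:
            return 0.0  # below threshold
        x = math.sqrt(Q * (E_gamma - Q)) / E_gamma
        correction = (0.007947 * x**2
                      * (math.sqrt(Q) - math.sqrt(0.037))**2
                      / (E_gamma - (Q - 0.037)))
        return 18.75 * (x**3 + correction)

    # The fit peaks at a few mb somewhat above threshold:
    print(sigma_d_gamma_n_p(4.4))  # ~2.4 mb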

01 Feb 1992
TL;DR: An ongoing program on microfabricated field-emitter arrays has produced a gated field-emitter tip structure with submicrometer dimensions, along with techniques for fabricating emitter arrays with tip packing densities of up to 1.5×10⁷ tips/cm².
Abstract: An ongoing program on microfabricated field-emitter arrays has produced a gated field-emitter tip structure with submicrometer dimensions and techniques for fabricating emitter arrays with tip packing densities of up to 1.5×10⁷ tips/cm². Arrays have been fabricated over areas varying from a few micrometers up to 13 cm in diameter. Very small overall emitter size, materials selection, and rigorous emitter-tip processing procedures have contributed to reducing the potential required for field emission to tens of volts. Emission current densities of up to 100 A/cm² have been achieved with small arrays of tips, and 100-mA total emission is commonly produced with arrays 1 mm in diameter containing 10,000 tips. Transconductances of 5.0 μS per tip have been demonstrated, indicating that 50 S/cm² should be achievable with tip densities of 10⁷ tips/cm². Details of the cathode arrays and a variety of performance characteristics are discussed.
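
A quick arithmetic check of the figures quoted in the abstract, using only the stated numbers:

    # Arithmetic check of the figures quoted in the abstract.
    gm_per_tip   = 5.0e-6   # demonstrated transconductance per tip, in S
    tips_per_cm2 = 1.0e7    # achievable tip density, per cm^2
    print(gm_per_tip * tips_per_cm2)  # -> 50.0 S/cm^2, as stated

    # 100 mA of total emission from a 10,000-tip array works out to
    # about 10 microamps of emission per tip.
    print(0.100 / 10_000)   # -> 1e-05 A per tip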

314 citations

Journal ArticleDOI
TL;DR: The aim of probabilistic explanation, it is argued, is not to demonstrate that the explanandum fact was nomically expectable, but to give an account of the chance mechanism(s) responsible for it.
Abstract: It has been the dominant view that probabilistic explanations of particular facts must be inductive in character. I argue here that this view is mistaken, and that the aim of probabilistic explanation is not to demonstrate that the explanandum fact was nomically expectable, but to give an account of the chance mechanism(s) responsible for it. To this end, a deductive-nomological model of probabilistic explanation is developed and defended. Such a model has application only when the probabilities occurring in covering laws can be interpreted as measures of objective chance, expressing the strength of physical propensities. Unlike inductive models of probabilistic explanation, this deductive model stands in no need of troublesome requirements of maximal specificity or epistemic relativization.

196 citations


Cites background from "The Atomic Nucleus"

  • ...Thus a transmission coefficient for U238 alpha-particles is determined, which, given certain simplifying assumptions about the goings-on inside the nucleus, yields the probability that such a particle will tunnel out of the potential well "per unit time for one nucleus," namely, λ238 ([1], p....

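A short numerical aside to the excerpt above: the decay constant λ238 is exactly this per-nucleus tunnelling probability per unit time, and it can be recovered from the measured U-238 half-life. The half-life figure below is a standard value assumed here, not taken from the excerpt.

    # Numerical aside: the per-nucleus decay probability per unit time
    # (lambda_238) from the measured U-238 half-life. The half-life is a
    # standard value assumed for illustration, not from the excerpt.
    import math

    HALF_LIFE_YEARS = 4.468e9      # U-238 half-life, years
    SECONDS_PER_YEAR = 3.156e7

    lam = math.log(2.0) / (HALF_LIFE_YEARS * SECONDS_PER_YEAR)
    print(lam)  # ~4.9e-18 per nucleus per second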

01 Dec 1979
TL;DR: The three basic elements of stratospheric science (laboratory measurements, atmospheric observations, and theoretical studies) are presented, along with an attempt to predict, with reasonable confidence, the effect on ozone of particular anthropogenic sources of pollution.
Abstract: The present status of stratospheric science is discussed. The three basic elements of stratospheric science (laboratory measurements, atmospheric observations, and theoretical studies) are presented, along with an attempt to predict, with reasonable confidence, the effect on ozone of particular anthropogenic sources of pollution.

166 citations

References
Journal ArticleDOI
TL;DR: A unified computational framework for describing impulsive flares on the Sun and on dMe stars is presented; the models assume that the flare impulsive phase is caused by a beam of charged particles that is accelerated in the corona and propagates downward, depositing energy and momentum along the way.
Abstract: We present a unified computational framework that can be used to describe impulsive flares on the Sun and on dMe stars. The models assume that the flare impulsive phase is caused by a beam of charged particles that is accelerated in the corona and propagates downward depositing energy and momentum along the way. This rapidly heats the lower stellar atmosphere causing it to explosively expand and dramatically brighten. Our models consist of flux tubes that extend from the sub-photosphere into the corona. We simulate how flare-accelerated charged particles propagate down one-dimensional flux tubes and heat the stellar atmosphere using the Fokker-Planck kinetic theory. Detailed radiative transfer is included so that model predictions can be directly compared with observations. The flux of flare-accelerated particles drives return currents which additionally heat the stellar atmosphere. These effects are also included in our models. We examine the impact of the flare-accelerated particle beams on model solar and dMe stellar atmospheres and perform parameter studies varying the injected particle energy spectra. We find the atmospheric response is strongly dependent on the accelerated particle cutoff energy and spectral index.
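
A minimal sketch of the injected-beam parameterization the abstract describes: a power-law number flux above a low-energy cutoff, characterized by the cutoff energy and spectral index that the parameter studies vary. The normalization convention and all names are assumptions for illustration, not the authors' implementation.

    # Minimal sketch (not the authors' implementation) of a power-law
    # particle beam above a low-energy cutoff, normalized so that the
    # total injected energy flux matches a chosen value.
    import numpy as np

    def beam_number_flux(E, E_cut, delta, F_total):
        """dN/dE for a power-law beam; E and E_cut in keV, delta > 2,
        F_total in energy-flux units consistent with E (e.g. keV/cm^2/s)."""
        # Normalization A from requiring
        #   integral_{E_cut}^{inf} E * A * (E/E_cut)^(-delta) dE
        #     = A * E_cut**2 / (delta - 2) = F_total
        A = F_total * (delta - 2.0) / E_cut**2
        return np.where(E >= E_cut, A * (E / E_cut) ** (-delta), 0.0)

    # Example: 20 keV cutoff and spectral index 4 -- the two knobs the
    # parameter studies described above vary.
    E = np.logspace(1.0, 3.0, 200)          # 10 keV to 1 MeV grid
    flux = beam_number_flux(E, E_cut=20.0, delta=4.0, F_total=1e11)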

175 citations