
Showing papers in "Progress in Nuclear Science and Technology" (2011)


Journal ArticleDOI
TL;DR: An overview of the electromagnetic (EM) physics of the Geant4 toolkit, covering the "Standard" and "Low-energy" model sets and recent developments: relativistic models for bremsstrahlung and e+e- pair production, models of multiple and single scattering, hadron/ion ionization, and microdosimetry for very low energies.
Abstract: An overview of the electromagnetic (EM) physics of the Geant4 toolkit is presented. Two sets of EM models are available: the "Standard", initially focused on high energy physics (HEP), and the "Low-energy", developed for medical, space and other applications. The "Standard" models provide faster computation but are less accurate for keV energies; the "Low-energy" models are more accurate but more CPU-time consuming. A common interface to EM physics models has been developed, allowing a natural combination of ultra-relativistic, relativistic and low-energy models in the same run and providing both precision and CPU performance. Thanks to this migration, additional capabilities have become available. The new developments include relativistic models for bremsstrahlung and e+e- pair production, models of multiple and single scattering, hadron/ion ionization, microdosimetry for very low energies, and also improvements in existing Geant4 models. In parallel, validation suites and benchmarks have been intensively developed.

98 citations
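The "common interface" described in this abstract amounts to registering several models per EM process, each valid over its own energy range, and dispatching on the track energy at run time. Below is a minimal Python sketch of that dispatch pattern; the class names, energy bounds and toy cross-sections are illustrative assumptions, not Geant4's actual C++ API.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class EmModel:
    name: str
    e_min: float                        # MeV; validity bounds are made up here
    e_max: float
    cross_section: Callable[[float], float]

@dataclass
class CombinedEmProcess:
    """Hold several models and pick the one covering the track energy."""
    models: List[EmModel] = field(default_factory=list)

    def register(self, model: EmModel) -> None:
        self.models.append(model)
        self.models.sort(key=lambda m: m.e_min)

    def select(self, energy: float) -> EmModel:
        # Prefer the highest-threshold model that still covers this energy:
        # a precise low-energy model handles keV tracks while a faster
        # standard model handles the relativistic regime.
        for m in reversed(self.models):
            if m.e_min <= energy <= m.e_max:
                return m
        raise ValueError(f"no registered model covers {energy} MeV")

proc = CombinedEmProcess()
proc.register(EmModel("low-energy", 1e-4, 1.0, lambda e: 5.0 / e))  # toy values
proc.register(EmModel("standard", 1.0, 1e5, lambda e: 2.0 / e))
print(proc.select(0.01).name)   # -> low-energy
print(proc.select(50.0).name)   # -> standard
```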


Journal ArticleDOI
Abstract: Mathieu KARAMITROS, Alfonso MANTERO, Sebastien INCERTI, Werner FRIEDLAND, Gerard BALDACCHINO, Philippe BARBERET, Mario BERNAL, Riccardo CAPRA, Christophe CHAMPION, Ziad EL BITAR, Ziad FRANCIS, Paul GUEYE, Anton IVANCHENKO, Vladimir IVANCHENKO, Hisaya KURASHIGE, Barbara MASCIALINO, Philippe MORETTO, Petteri NIEMINEN, Giovanni SANTIN, Herve SEZNEC, Hoang N. TRAN, Carmen VILLAGRASA and Christina ZACHARATOU

97 citations


Journal ArticleDOI
TL;DR: Hadrontherapy as mentioned in this paper is a C++, free and open source application developed using the Geant4 Monte Carlo libraries, which allows the simulation of a typical proton/ion transport beam line and the calculation of dose and fluence distributions inside a test phantom.
Abstract: Hadrontherapy is a C++, free and open source application developed using the Geant4 Monte Carlo libraries. The basic version of Hadrontherapy is contained in the official Geant4 distribution (www.cern.ch/Geant4/download), inside the category of the advanced examples. This version permits the simulation of a typical proton/ion transport beam line and the calculation of dose and fluence distributions inside a test phantom. A more complete version of the program is separately maintained and released by the authors, and it offers a wider set of tools useful for users interested in proton/ion-therapy studies. It gives the possibility to retrieve ion stopping powers in arbitrary geometrical configurations, to calculate 3D distributions of fluence, deposited dose and LET of primary and generated secondary beams, to simulate typical nuclear physics experiments, and to interactively switch between different implemented geometries, etc. In this work the main characteristics of the current full version of Hadrontherapy are reported, and results are discussed and compared with the available experimental data. For more information the reader can refer to the Hadrontherapy website.

68 citations
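As a sketch of the kind of tally Hadrontherapy computes, the snippet below accumulates energy deposit and dose-averaged LET on a one-dimensional water phantom grid. All names, dimensions and step values are hypothetical toy inputs standing in for Geant4 step data, not the application's real interface.

```python
import numpy as np

n_vox, rho, vox_cm = 100, 1.0, 0.1           # water density (g/cm^3), voxel size (cm)
edep = np.zeros(n_vox)                       # MeV deposited per voxel
let_wsum = np.zeros(n_vox)                   # energy-weighted LET accumulator

def score_step(ivox, de_mev, let_kev_um):
    """Score one particle step: energy deposit and edep-weighted LET."""
    edep[ivox] += de_mev
    let_wsum[ivox] += de_mev * let_kev_um

# A transport loop would call score_step() for every step; two toy steps:
score_step(10, 0.5, 2.0)
score_step(10, 0.3, 8.0)

mass_kg = rho * vox_cm**3 * 1e-3             # voxel mass
dose_gy = edep * 1.602e-13 / mass_kg         # 1 MeV = 1.602e-13 J
let_d = np.divide(let_wsum, edep,            # dose-averaged LET, keV/um
                  out=np.zeros(n_vox), where=edep > 0)
```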


Journal ArticleDOI
TL;DR: In this paper, the authors provide a review of the hybrid Monte Carlo/deterministic radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses.
Abstract: This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally 10^2 to 10^4, have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.

55 citations
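The core CADIS recipe is compact enough to show in a few lines: given an adjoint-flux estimate from a fast deterministic solve, the source is biased by adjoint importance and weight-window centers are set so that weight times importance is constant. The sketch below uses a made-up one-dimensional mesh and only illustrates the bookkeeping; it is not the MAVRIC or ADVANTG implementation.

```python
import numpy as np

# Adjoint flux (importance) and forward source on a coarse 1-D mesh (toy values).
phi_adj = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1.0])  # importance rises toward detector
q = np.array([1.0, 0.5, 0.0, 0.0, 0.0])            # source sits far from it

R = np.sum(q * phi_adj)        # deterministic estimate of the detector response
q_hat = q * phi_adj / R        # CADIS biased source pdf (sums to 1)
w = R / phi_adj                # weight-window centers: w * phi_adj == R everywhere

# A particle born in cell i from q_hat carries weight q[i]/q_hat[i] = R/phi_adj[i],
# which already sits at the window center, so source and transport biasing are
# consistent: the property the "C" in CADIS refers to.
print(q_hat, w)
```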


Journal ArticleDOI
TL;DR: A high-performance, highly scalable eigenvalue solver is introduced with the goal of realizing the K-computer system, which is a next-generation supercomputer system.
Abstract: For current supercomputer systems, multicore and multisocket processors are required in order to build a system, and choice of interconnection is essential. In addition, for effective development of new code, high-performance, scalable, and reliable numerical software is key. ScaLAPACK and PETSc are software developed for distributed memory parallel computer systems. Real computation requires software that is highly tuned for implementation on new architectures, such as many-core processors. In the present study, we introduce a high-performance, highly scalable eigenvalue solver with the goal of realizing the K-computer system, which is a next-generation supercomputer system. We have developed two versions of this eigenvalue solver, namely, the standard version (eigen_s) and an enhanced-performance version (eigen_sx), both of which were developed on the T2K cluster system housed at the University of Tokyo. Eigen_s uses conventional algorithms, such as Householder tridiagonalization, the divide and conquer (DC) algorithm, and the Householder back-transformation. These algorithms are carefully implemented using a blocking technique and flexible two-dimensional data distribution in order to reduce the overhead of memory traffic and data transfer, respectively. Eigen_s performs excellently on the T2K system with 4,096 cores (theoretical peak: 37.6 TFLOPS) and exhibits fine performance (3.0 TFLOPS) with a 200,000-dimensional matrix. The enhanced version, eigen_sx, uses more advanced algorithms, such as the narrow-band reduction algorithm, DC for band matrices, and the block Householder back-transformation with WY representation. Even though this version is still in the test stage, eigen_sx has realized 4.7 TFLOPS with a 200,000-dimensional matrix.

54 citations
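The pipeline described above (Householder tridiagonalization, a tridiagonal eigensolve such as divide-and-conquer, then back-transformation) can be illustrated at small scale with NumPy/SciPy. This is a teaching sketch of the textbook algorithm, with none of the blocking, 2-D data distribution or parallelism that give eigen_s its performance.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def householder_tridiagonalize(A):
    """Reduce a symmetric matrix to tridiagonal form with Householder reflectors."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 2):
        x = A[k + 1:, k]
        v = x.copy()
        v[0] += (1.0 if x[0] >= 0 else -1.0) * np.linalg.norm(x)
        nv = np.linalg.norm(v)
        if nv == 0.0:                      # column already in tridiagonal form
            continue
        v /= nv
        # Two-sided update with H = I - 2 v v^T on the trailing block.
        A[k + 1:, k:] -= 2.0 * np.outer(v, v @ A[k + 1:, k:])
        A[k:, k + 1:] -= 2.0 * np.outer(A[k:, k + 1:] @ v, v)
    return A

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
A = (M + M.T) / 2.0                        # symmetric test matrix
T = householder_tridiagonalize(A)
d, e = np.diag(T), np.diag(T, 1)
w, _ = eigh_tridiagonal(d, e)              # LAPACK tridiagonal stage
assert np.allclose(w, np.linalg.eigvalsh(A))   # same spectrum as the original
```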


Journal ArticleDOI
TL;DR: In this article, the influence of misorientation angle and solute elements (carbon and nitrogen) on the cohesive energy of symmetrical tilt grain boundaries (STGBs) in bcc Fe under a gaseous hydrogen environment was investigated.
Abstract: Highly accurate predictions of hydrogen’s influence on material strength and development of materials with minimal hydrogen effects are essential to prevent failure under various hydrogen environments. Here, we investigated the influence of misorientation angle and solute elements (carbon and nitrogen) on the cohesive energy of symmetrical tilt grain boundaries (STGBs) in bcc Fe under a gaseous hydrogen environment by using density functional theory. We found a good correlation among GB energy, GB free volume, and the hydrogen concentration at GBs under hydrogen environments: high-energy GBs have large gaps, and many hydrogen atoms are captured at these spaces. Thus, higher-energy GBs are more influenced by hydrogen. It is also shown that the binding energy between hydrogen and a GB is negligible when nitrogen or carbon atoms exist at the GB at their solubility limit. Therefore, carbon and nitrogen atoms exclude hydrogen atoms from GBs and improve the cohesive energy of GBs under hydrogen environments.

45 citations
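The two central quantities in this abstract, the hydrogen-GB binding energy and its sensitivity to what else occupies the boundary, are simple differences of supercell total energies. The sketch below shows that bookkeeping with hypothetical DFT energies; none of the numbers come from the paper, and sign conventions vary between studies.

```python
# Hypothetical DFT total energies in eV (illustrative only).
E_gb, E_bulk = -850.12, -851.90          # supercells without hydrogen
E_gb_h, E_bulk_h = -854.60, -855.95      # the same supercells with one H atom

# Binding (segregation) energy of H to the grain boundary:
# positive => H is more stable at the GB than at a bulk interstitial site.
E_bind = (E_bulk_h - E_bulk) - (E_gb_h - E_gb)
print(f"H-GB binding energy: {E_bind:.2f} eV")   # 0.43 eV with these toy numbers

# The paper's observation corresponds to E_bind -> ~0 once C or N occupies
# the GB sites at the solubility limit (computed from analogous supercells).
```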


Journal ArticleDOI
TL;DR: Geant4 native pre-equilibrium and de-excitation models are used for sampling of residual nucleus fragmentation in a number of generators of inelastic hadron/ion interaction with nuclei including QGS, FTF, Binary Cascade, QMD and some others as discussed by the authors.
Abstract: Geant4 native pre-equilibrium and de-excitation models are used for sampling of residual nucleus fragmentation in a number of generators of inelastic hadron/ion interaction with nuclei, including QGS, FTF, Binary Cascade, QMD and some others. The pre-compound model is responsible for pre-equilibrium emission of protons, neutrons and light ions. The de-excitation model provides sampling of evaporation of neutrons, protons and light fragments up to magnesium, Fermi break-up, statistical multifragmentation, fission and photon emission. Recently a review of the pre-equilibrium and de-excitation models of Geant4 has been performed, and we report a summary of the modifications introduced. The results of the validation versus various published data are presented.

38 citations


Journal ArticleDOI
TL;DR: This paper presents (1) a hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation.
Abstract: This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform “real” commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the “gold standard” for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The hybrid method development is based on an extension of the FW-CADIS method, which attempts to achieve uniform statistical uncertainty throughout a designated problem space. The MC DD development is being implemented in conjunction with the Denovo deterministic radiation transport package to have direct access to the 3-D, massively parallel discrete-ordinates solver (to support the hybrid method) and the associated parallel routines and structure. This paper describes the hybrid method, its implementation, and initial testing results for a realistic 2-D quarter core pressurized-water reactor model and also describes the MC DD algorithm and its implementation.

37 citations
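A toy version of the domain-decomposition idea can be written in a few lines: each domain owns a spatial slab and a local tally, and a particle that leaves its slab is banked for the owning neighbour instead of being tracked remotely. The sketch below is single-process Python with monotonically right-moving particles; a real implementation would exchange the banks with MPI messages, as the paper describes.

```python
from collections import deque

N_DOM, X_MAX, STEP = 4, 4.0, 0.3
bounds = [(i * X_MAX / N_DOM, (i + 1) * X_MAX / N_DOM) for i in range(N_DOM)]
banks = [deque() for _ in range(N_DOM)]     # per-domain incoming-particle banks
tally = [0.0] * N_DOM                       # per-domain local tallies

def track(dom, x, w):
    """Track a (right-moving) particle inside domain `dom`, then bank it."""
    lo, hi = bounds[dom]
    while lo <= x < hi:
        tally[dom] += w * STEP              # score only into the local tally
        x += STEP
    if x < X_MAX:
        banks[dom + 1].append((x, w))       # hand off to the neighbouring domain

banks[0].append((0.0, 1.0))                 # one source particle in domain 0
for dom in range(N_DOM):                    # sweep domains in flight order
    while banks[dom]:
        track(dom, *banks[dom].popleft())

print(tally)
```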


Journal ArticleDOI
TL;DR: The history of Monte Carlo methods is closely linked to that of computers: the first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s.
Abstract: The history of Monte Carlo methods is closely linked to that of computers: the first known Monte Carlo program was written in 1947 for the ENIAC; a pre-release of the first Fortran compiler was used for Monte Carlo in 1957; Monte Carlo codes were adapted to vector computers in the 1980s, clusters and parallel computers in the 1990s, and teraflop systems in the 2000s. Recent advances include hierarchical parallelism, combining threaded calculations on multicore processors with message-passing among different nodes. With the advances in computing, Monte Carlo codes have evolved with new capabilities and new ways of use. Production codes such as MCNP, MVP, MONK, TRIPOLI and SCALE are now 20-30 years old (or more) and are very rich in advanced features. The former 'method of last resort' has now become the first choice for many applications. Calculations are now routinely performed on office computers, not just on supercomputers. Current research and development efforts are investigating the use of Monte Carlo methods on FPGAs, GPUs, and many-core processors. Other far-reaching research is exploring ways to adapt Monte Carlo methods to future exaflop systems that may have 1M or more concurrent computational processes.

32 citations


Journal ArticleDOI
TL;DR: In this article, a full Monte Carlo method is proposed to perform reactor dynamics calculations on a seconds-to-minutes scale without approximations; the simulation is divided into time steps, and the ancestor of a particle is tracked throughout the simulation for proper variance estimation.
Abstract: In safety calculations of nuclear reactors, dynamic calculations are of great interest. This type of calculation is mainly done by making an approximation and then using a deterministic code to solve the simplified problem. In this paper a full Monte Carlo method is proposed to perform these calculations on a seconds-to-minutes scale without approximations. First, the sampling of precursor decay is needed. The decay of precursors plays a crucial role in the dynamics of nuclear reactors, but the lifetime of a precursor can go up to 10^2 s. This is much larger than the order of the lifetime of a prompt neutron (10^-4 s) or the order of a prompt neutron chain (10^-2 s). Therefore precursors in this simulation are forced to decay according to a modified pdf. This ensures a small variance in the delayed neutron distribution in time. The simulation scheme has also been adapted for dynamic Monte Carlo. The simulation is now divided into time steps, and the ancestor of a particle needs to be tracked throughout the simulation for proper variance estimation. After each time step the system properties can be adjusted. Finally, the dynamic Monte Carlo method is used to calculate the power production in a simple system. The results agree with a point kinetics calculation of the same problem.

32 citations
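The forced-decay trick in this abstract is easy to demonstrate: instead of sampling the precursor decay time from the true exponential pdf (which mostly falls outside a short time window), one samples uniformly inside the window and carries the pdf ratio as a statistical weight. A small NumPy sketch, with a made-up decay constant and window:

```python
import numpy as np

rng = np.random.default_rng(42)
lam, T, n = 0.08, 10.0, 100_000       # decay constant (1/s), window (s), samples

# Analog sampling: a large fraction of decays misses the window entirely.
t_analog = rng.exponential(1.0 / lam, n)
frac_inside = np.mean(t_analog < T)

# Forced decay: every precursor decays inside [0, T]; the weight is the
# ratio of the true pdf lam*exp(-lam*t) to the sampling pdf 1/T.
t_forced = rng.uniform(0.0, T, n)
w = T * lam * np.exp(-lam * t_forced)

# Both estimate P(decay before T) = 1 - exp(-lam*T) ~= 0.551, but the forced
# scheme spreads delayed-neutron production smoothly over the whole window.
print(frac_inside, w.mean())
```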


Journal ArticleDOI
TL;DR: In this article, the authors briefly review techniques and methods for atmospheric dispersion modeling of radioactive material in radiological risk assessment and emergency response, and introduce the current status of nuclear accident consequence assessment in China.
Abstract: The purpose of a consequence assessment system is to assess the consequences of specific hazards on people and the environment. In this paper, studies on techniques and methods for atmospheric dispersion modeling of radioactive material in radiological risk assessment and emergency response are briefly reviewed, and the current status of nuclear accident consequence assessment in China is introduced. Future work should give high priority to extending dispersion modeling to additional scales (such as the urban building scale), establishing high-quality experimental datasets and model evaluation methods, and improving real-time modeling methods that work with limited input data.
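As a concrete example of the simplest tool in this family, the snippet below evaluates a ground-reflected Gaussian plume, the textbook starting point for the dispersion models reviewed here. The release rate, wind speed and dispersion parameters are illustrative assumptions; operational models derive sigma_y and sigma_z from downwind distance and atmospheric stability class.

```python
import numpy as np

def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume: air concentration (Bq/m^3 for Q in Bq/s)."""
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))  # image-source reflection
    return Q * lateral * vertical / (2.0 * np.pi * sigma_y * sigma_z * u)

# Ground-level, centerline concentration for a 1 GBq/s release at 50 m height,
# 5 m/s wind, with sigma values typical of a few km downwind (all assumed).
c = plume_concentration(Q=1.0e9, u=5.0, y=0.0, z=0.0, H=50.0,
                        sigma_y=80.0, sigma_z=40.0)
print(f"{c:.3e} Bq/m^3")
```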


Journal ArticleDOI
TL;DR: The PTSim collaboration in Japan and the TOPAS collaboration in the United States wrap and extend the Geant4 toolkit to meet the needs of the particle therapy community; this paper describes both projects, how their designs bridge the gap between flexibility and ease of use, and how the two projects may evolve together.
Abstract: Though the Geant4 Simulation Toolkit has been widely accepted in the particle therapy community, with research and clinical use at most of the major centers currently involved in this innovative approach to cancer treatment, the high level of Geant4 expertise required for these applications has proven a serious barrier for users. The PTSim collaboration in Japan and the TOPAS collaboration in the United States wrap and extend the Geant4 toolkit to meet the needs of this critical community. PTSim has provided a common platform to model three Japanese proton and ion therapy facilities plus three more in other countries, allowing users who are not Geant4 experts to accurately and efficiently run Geant4 simulations for any of these pre-built configurations. Building on a rich history of proton therapy applications at MGH (site of the world's first proton therapy system), NCC Korea, and elsewhere, the TOPAS project aims to take flexibility further, allowing any particle therapy clinician or researcher to Geant4-simulate their own real or envisioned facility, still without requiring a Geant4 expert. We describe these projects, how their designs bridge the gap between flexibility and ease of use, what key missing software components they have contributed, and how the two projects may evolve together.



Journal ArticleDOI
TL;DR: In this paper, the authors investigated the light output response of NaI(Tl), CsI(Tl), GSO(Ce) and LYSO(Ce) crystals for intermediate-energy 4He, 12C and 40Ar beams from HIMAC at the National Institute of Radiological Sciences.
Abstract: We carried out experiments to investigate the light output response of NaI(Tl), CsI(Tl), GSO(Ce) and LYSO(Ce) crystals for intermediate-energy 4He, 12C and 40Ar beams from HIMAC at the National Institute of Radiological Sciences. For comparison, we also investigated the light output of these crystals for gamma rays of several energies. From these light output responses, we obtained the relationship between the scintillation efficiency (dL/dx) and the specific energy loss (dE/dx) for each crystal. The scintillation efficiency curves of the NaI(Tl) and CsI(Tl) crystals peak at a particular dE/dx. On the other hand, the scintillation efficiency curves of the GSO(Ce) and LYSO(Ce) crystals decrease with increasing dE/dx. The light output curves of these crystals were systematically reproduced using the obtained scintillation efficiencies.
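A common way to parametrize scintillation-efficiency curves of the kind discussed above is Birks' law, dL/dx = S(dE/dx)/(1 + kB·dE/dx), which the sketch below integrates along a toy stopping-power profile to get a total light output. The S and kB values and the dE/dx samples are invented for illustration; note that plain Birks saturates monotonically, so reproducing the peaked NaI(Tl)/CsI(Tl) curves reported here would need a more general efficiency function.

```python
import numpy as np

def dl_dx(dedx, S=1.0, kB=0.01):
    """Birks' law: scintillation yield per unit path length."""
    return S * dedx / (1.0 + kB * dedx)

# Toy dE/dx profile (MeV/cm) rising toward the Bragg peak, sampled every 10 um.
dedx_track = np.array([20.0, 35.0, 60.0, 120.0, 400.0])
dx_cm = 1.0e-3

# Total light output L = integral of (dL/dx) dx along the track.
L = np.sum(dl_dx(dedx_track) * dx_cm)
print(f"relative light output: {L:.4f}")
```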

Journal ArticleDOI
TL;DR: In this paper, the distribution patterns of thorium, uranium and rare earth elements (REEs) are discussed for soil samples of weathered acid rock (granite and rhyolite) collected from two prefectures of Japan: (1) Kobe city in Hyogo prefecture and (2) Mutsu city and Higashidori village in Aomori prefecture.
Abstract: The geochemical behaviors of thorium, uranium and rare earth elements (REEs) are relatively close to one another when compared to other elements in a geological environment. Radioactive elements like Th and U, along with their decay products (e.g. Ra), are present in most environmental matrices and can be transferred to living bodies by different pathways, which can make them sources of human exposure. For these reasons, it is necessary to monitor those natural radionuclides in weathered soil samples to assess the possible hazards. It has been observed that granitic rocks contain higher amounts of U, Th and light REEs than other igneous rocks such as basalts and andesites. To better understand the interaction between REEs and soils, the nature of the soils must be considered. In this paper, we discuss the distribution patterns of Th and U along with REEs in soil samples of weathered acid rock (granite and rhyolite) collected from two prefectures of Japan: (1) Kobe city in Hyogo prefecture and (2) Mutsu city and Higashidori village in Aomori prefecture.

Journal ArticleDOI
TL;DR: In this article, the authors investigate a data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node, where each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain.
Abstract: One forthcoming challenge in the area of high-performance computing is having the ability to run large-scale problems while coping with less memory per compute node. In this work, we investigate a novel data decomposition method that would allow Monte Carlo transport calculations to be performed on systems with limited memory per compute node. In this method, each compute node remotely retrieves a small set of geometry and cross-section data as needed and remotely accumulates local tallies when crossing the boundary of the local spatial domain. Initial results demonstrate that while the method does allow large problems to be run in a memory-limited environment, achieving scalability may be difficult due to inefficiencies in the current implementation of RMA operations.
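The decomposition described here, remote reads of geometry/cross-section data plus remote tally accumulation, can be mocked up in plain Python with functions standing in for one-sided Get/Accumulate operations. The block distribution, array sizes and scores below are all invented; a real code would back the two helpers with MPI RMA windows.

```python
import numpy as np

N_NODES, N_CELLS = 4, 16
BLOCK = N_CELLS // N_NODES                       # block distribution of cells
rng = np.random.default_rng(1)
xs_store = [rng.random(BLOCK) for _ in range(N_NODES)]      # owned XS slices
tally_store = [np.zeros(BLOCK) for _ in range(N_NODES)]     # owned tally slices

def owner(cell):
    return cell // BLOCK

def rma_get_xs(cell):
    """Stand-in for a one-sided Get from the owning node's memory."""
    return xs_store[owner(cell)][cell % BLOCK]

def rma_accumulate_tally(cell, score):
    """Stand-in for a one-sided Accumulate into the owner's tally."""
    tally_store[owner(cell)][cell % BLOCK] += score

# Any node's transport loop can touch non-local cells on demand:
sigma = rma_get_xs(13)                           # fetch remote cross-section
rma_accumulate_tally(13, 0.25 / sigma)           # score into the remote tally
```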

Journal ArticleDOI
TL;DR: In this paper, the authors developed a new fabrication method which has the advantages of process simplicity and high efficiency; the alpha-particle response of a detector manufactured by the new spreading method was evaluated at varied ZnS(Ag) thicknesses with an Am-241 alpha source, and the detection efficiency was better than that of other methods such as the liquid-brush method.
Abstract: The silver-activated zinc sulfide, ZnS(Ag), sensor used to detect alpha particles is normally fabricated by means of heat-melting or epoxy-mixing spread. However, the fabrication process is very complicated, so it incurs high costs and requires special high-tech equipment to manufacture the detector. For this reason, we have developed a new fabrication method which has the advantages of process simplicity and high efficiency. The alpha-particle response of the detector manufactured by the new spreading method was evaluated at varied thicknesses of ZnS(Ag) using an Am-241 alpha radiation source, and the detection efficiency was better than for other methods such as the liquid-brush method. Compared to conventional ZnS(Ag) detectors, the new detector shows good detection efficiency, and its simple and low-cost design makes it an economical and commercial alternative to more expensive alpha survey instruments.



Journal ArticleDOI
TL;DR: In this paper, the INCL intra-nuclear cascade and ABLA de-excitation (evaporation/fission) models are used to calculate collisions between projectile particles and nuclei at energies from 0.2 to 3 GeV.
Abstract: We present the INCL intra-nuclear cascade and ABLA de-excitation (evaporation/fission) models that can be used to calculate collisions between projectile particles and nuclei at energies from 0.2 to 3 GeV. Supported projectiles are protons, neutrons, pions and light ions up to alpha. The optimal target materials range from carbon to uranium. We have extended the INCL model's light-ion projectile support up to carbon ions and compared the results against thin- and thick-target data and the Geant4 Binary Cascade model. Both the thin-target and the first thick-target results of the INCL model are in good agreement with experimental data.

Journal ArticleDOI
TL;DR: In this article, a new benchmark for spallation reactions is organized, under the auspices of the IAEA, with a possibility to update the data (experimental and code calculation) via a web site.
Abstract: Spallation reactions play an important role in many branches of physics. While nuclear reactions induced by low-energy neutrons, as in nuclear power plants or research reactors, can be handled with cross-section databases, this is not the case for spallation reactions, due to the energy range (from 100 MeV up to ~3 GeV) and the candidate targets (all nuclei). These reactions therefore have to be modeled. Developments of codes are linked to new experiments, and this explains why a first benchmark, restricted to particle production, was launched fifteen years ago, followed by a second one on residues. Since then, new data have been measured and computer tools are improved day after day, so a new benchmark has been organized, under the auspices of the IAEA, taking into account all types of results, particle and residue production, with the possibility to update the data (experimental and code calculations) via a web site. Results of the benchmark, concerning both the analysis of the prediction capabilities of the seventeen participating models and the first conclusions on the physics of spallation models, are presented.

Journal ArticleDOI
TL;DR: In this paper, the authors calculated the dose conversion coefficients for neutrons and protons based on the ICRP 2007 Recommendations for three additional irradiation geometries: semi-isotropic irradiation from the upper hemisphere, and geometries closely representing the geometrical situations of aircrew exposure for two different flight conditions.
Abstract: The estimation of the fluence-to-effective dose conversion coefficients used especially for aircrew dosimetry is an important issue, since the geometrical conditions of aircrew exposure are considered not to be represented by the standard irradiation geometries such as the anterior-to-posterior (AP) and isotropic (ISO) geometries. We therefore calculated the dose conversion coefficients for neutrons and protons based on the ICRP 2007 Recommendations for three additional irradiation geometries: semi-isotropic irradiation from the upper hemisphere, and geometries closely representing the geometrical situations of aircrew exposure for two different flight conditions. The calculation was performed by the PHITS code coupled with the ICRP/ICRU adult reference computational phantoms. It was found from the calculation that (1) the conversion coefficients for the irradiation geometries closely representing the geometrical situations of aircrew exposure agree better with the corresponding data for the isotropic irradiation than those for the semi-isotropic irradiation; (2) the error associated with the assumption of the isotropic irradiation in the aircrew dosimetry is less than 1% for conventional flight conditions. These findings indicate the adequacy of the use of dose conversion coefficients for the isotropic irradiation geometry in aircrew dosimetry.
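The comparison performed in this paper reduces, per geometry, to folding a fluence spectrum with fluence-to-effective-dose conversion coefficients, E = sum_i phi_i * c_i. The sketch below does this for two geometries with invented spectra and coefficients; the real values are tabulated in the ICRP 2007 framework.

```python
import numpy as np

e_mev = np.array([1.0, 10.0, 100.0, 1000.0])     # neutron energy bins
phi = np.array([2.0e3, 1.5e3, 8.0e2, 1.0e2])     # fluence per bin, cm^-2 (assumed)
c_iso = np.array([300.0, 450.0, 520.0, 600.0])   # pSv*cm^2, ISO geometry (made up)
c_semi = np.array([290.0, 440.0, 515.0, 595.0])  # pSv*cm^2, semi-isotropic (made up)

E_iso = np.sum(phi * c_iso) * 1e-12              # effective dose, Sv
E_semi = np.sum(phi * c_semi) * 1e-12
print(f"ISO: {E_iso:.3e} Sv, semi-ISO: {E_semi:.3e} Sv, "
      f"difference: {abs(E_semi - E_iso) / E_iso:.1%}")
```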

Journal ArticleDOI
TL;DR: In this article, the mass attenuation coefficients of fabricated glass systems at 662 keV were determined using the narrow-beam transmission method; the results are consistent with theoretical calculations from the WinXCom software.
Abstract: Glasses with composition xBaO:(80-x)B2O3:20Flyash (x = 45, 50, 55, 60, 65 and 70 wt.%) have been prepared using the melt-quenching method, and the mass attenuation coefficients of the fabricated glass system at 662 keV have been determined using the narrow-beam transmission method. The results are consistent with theoretical calculations from the WinXCom software. The mean free path (mfp) results show that better shielding properties are achieved at higher BaO concentrations. The density measurements are reported as a function of the BaO fraction. These data should be useful for potential applications of flyash in the development of radiation-shielding glasses.
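The narrow-beam analysis behind such measurements is a one-line application of the Beer-Lambert law: mu = ln(I0/I)/t, with mu/rho the mass attenuation coefficient and 1/mu the mean free path. The counts, thickness and density below are invented placeholders, not the paper's data.

```python
import numpy as np

I0, I = 10_000.0, 4_200.0     # detector counts without / with the glass sample
t_cm = 1.0                    # sample thickness
rho = 4.8                     # measured glass density, g/cm^3 (assumed)

mu = np.log(I0 / I) / t_cm    # linear attenuation coefficient, cm^-1
mu_rho = mu / rho             # mass attenuation coefficient, cm^2/g
mfp = 1.0 / mu                # mean free path, cm (shorter = better shielding)
print(f"mu/rho = {mu_rho:.4f} cm^2/g, mfp = {mfp:.2f} cm")
```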


Journal ArticleDOI
TL;DR: In this paper, the authors measured the secondary neutrons generated in the linac head, used for radiation therapy at 10 and 15 MV using a water-equivalent phantom, gold foil, and solid state track detectors.
Abstract: In recent radiotherapy techniques, neutrons are generated when the incident photon energy is higher than the threshold of the (γ, n) reaction. Reactions occur in the various target materials, as well as in the flattening filters and collimators comprising the head structure of the electron linac. The resulting secondary radiation may lead to secondary cancer in patients because of the increased radiation dose. We measured the secondary neutrons generated in the linac head, used for radiation therapy at 10 and 15 MV, using a water-equivalent phantom, gold foil, and solid state track detectors. We calculated the neutron distribution in a model of the same situation using the PHITS code. In the Monte Carlo calculations, the scattering of thermal neutrons in the water phantom was confirmed. The contribution from these neutrons was 0.1% or less of that from X-rays at 10 MV. Because the cross-section of the photonuclear reaction increases in a high-energy linac, further examination is required. For 15 MV X-rays, the amount of neutrons was 10 times higher than for 10 MV X-rays. The PHITS code was designed to simulate photonuclear reactions and is therefore suitable for calculations involving these reactions in radioactive materials.

Journal ArticleDOI
TL;DR: It is shown through numerical examples that elastoplastic dynamic responses can be estimated with good accuracy without resorting to macro models such as those involving plastic hinge and composite beam effects.
Abstract: The results of a high-precision finite element (FE) analysis using the E-Simulator, which is a parallel FE analysis software package for virtual shaking-table tests of civil or architectural structures, are presented for the seismic responses of a 4-story steel frame and a 31-story super-high-rise steel frame. The 4-story frame is a specimen of the full-scale total-collapse shaking-table test conducted in 2007 at the Hyogo Earthquake Engineering Research Center of the National Research Institute for Earth Science and Disaster Prevention, Japan. These steel frames are modeled by meshes of hexahedral solid elements. Large-strain elastoplasticity is considered in the analyses. It is shown through numerical examples that elastoplastic dynamic responses can be estimated with good accuracy without resorting to macro models such as those involving plastic hinge and composite beam effects.

Journal ArticleDOI
TL;DR: In this article, a simplified 3D model of the RBMK-1500 reactor core fragment with 14 fuel assemblies and 2 control rods distributed according to the real RBMK-1500 reactor core geometry has been created.
Abstract: Numerical modelling is an important tool for estimating radioactive waste generation at nuclear energy sites. In this work we have applied a Monte Carlo code, MCNPX version 2.6, to calculate the activation of the graphite stack in the RBMK-1500 reactor. A simplified 3D model of the RBMK-1500 reactor core fragment with 14 fuel assemblies and 2 control rods distributed according to the real RBMK-1500 reactor core geometry has been created. Use of the Monte Carlo method for estimation of impurity activation in the reactor core faces the problem of large uncertainties in effective cross-sections if the calculation statistics are insufficient; satisfactory results can be obtained by using parallel computing algorithms. ICP-MS mass spectrometry for identification of the impurity concentrations in virgin graphite from the RBMK-1500 reactor has been performed to support MCNPX modelling of the realistic neutron irradiation conditions of the graphite. The simulated radiological characteristics of the graphite have been compared with previous calculations made with different impurity concentrations obtained by neutron activation analysis and GDMS. The analysis of radioactive impurity content shows that C, Co, Fe, Pu, Am and Cm make the major contribution to graphite activity and radiotoxicity over hundreds of years. The obtained results are important for decommissioning of the Ignalina NPP and other NPPs with RBMK-type reactors.
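For a single impurity, the activation calculation the paper performs with MCNPX reduces in the point approximation to A = N·sigma·phi·(1 - e^(-lambda·t_irr))·e^(-lambda·t_cool). The sketch below applies it to a cobalt impurity in graphite; the flux, impurity level and times are assumed round numbers (only the Co-59 cross-section and Co-60 half-life are physical constants).

```python
import numpy as np

N_A = 6.022e23
YEAR = 3.156e7                          # seconds per year

c_co = 1.0e-6                           # Co impurity mass fraction (1 ppm, assumed)
N = c_co / 58.93 * N_A                  # Co-59 atoms per gram of graphite
sigma = 37.2e-24                        # Co-59 thermal (n,gamma) cross-section, cm^2
phi = 1.0e13                            # assumed in-core thermal flux, n/cm^2/s
lam = np.log(2.0) / (5.27 * YEAR)       # Co-60 decay constant, 1/s

t_irr, t_cool = 20.0 * YEAR, 10.0 * YEAR
A = N * sigma * phi * (1.0 - np.exp(-lam * t_irr)) * np.exp(-lam * t_cool)
print(f"Co-60 specific activity: {A:.2e} Bq/g")   # ~1e6 Bq/g with these inputs
```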