Nuclear energy system’s behavior and decision making using machine learning
TL;DR: In this paper, the authors used a neural network to predict the behavior of a small light water reactor (SWR) during various core power inputs and a loss-of-flow accident.
Abstract: Early artificial neural networks, which learn from data using multivariable statistics and optimization, demanded high computational performance because multiple training iterations are necessary to find a suitable local minimum. Rapid advancements in computational performance, storage capacity, and big-data management have allowed machine-learning techniques to improve in learning speed, non-linear data handling, and complex feature identification. Machine-learning techniques have proven successful in areas such as autonomous machines, speech recognition, and natural language processing. Although the application of artificial intelligence in the nuclear engineering domain has been limited, it has accurately predicted desired outcomes in some instances and has proven to be a worthwhile area of research. The objectives of this study are to create neural network topologies that use data from Oregon State University's Multi-Application Small Light Water Reactor integrated test facility and to evaluate their capability to predict the system's behavior during various core power inputs and a loss-of-flow accident. This study uses data from multiple sensors, focusing primarily on the reactor pressure vessel and its internal components. The resulting artificial neural networks predict the behavior of the system with good accuracy in each scenario. Their ability to provide technical data can help decision makers act more rapidly, identify safety issues, or support an intelligent system that uses pattern recognition for reactor accident identification and classification. Overall, the development and application of neural networks is promising for the nuclear industry and for any production process that can benefit from a quick data-analysis tool.
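The abstract describes the workflow only at a high level (sensor data in, behavior predictions out). As a minimal, hypothetical sketch of the kind of preprocessing such a network typically needs, the following scales each sensor channel into a common range before training; the channel names and readings are invented for illustration and are not MASLWR facility data.

```python
def min_max_scale(values, lo=0.0, hi=1.0):
    """Scale a list of sensor readings into the range [lo, hi]."""
    v_min, v_max = min(values), max(values)
    span = (v_max - v_min) or 1.0  # guard against constant channels
    return [lo + (hi - lo) * (v - v_min) / span for v in values]

# Illustrative channels (hypothetical values, not facility data):
core_power_kw = [10.0, 20.0, 40.0, 80.0]      # example input channel
rpv_temp_c    = [180.0, 195.0, 215.0, 240.0]  # example target channel

x = min_max_scale(core_power_kw)  # network inputs, scaled to [0, 1]
y = min_max_scale(rpv_temp_c)     # network targets, scaled to [0, 1]
```

Scaling all channels to the same range keeps one high-magnitude sensor (e.g. pressure in kPa) from dominating the gradient updates during training.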
Citations
TL;DR: An overview of the fundamentals of artificial intelligence and the state of development of learning-based methods in nuclear science and engineering is presented to identify the risks and opportunities of applying such methods to nuclear applications.
Abstract: Nuclear technology industries have increased their interest in using data-driven methods to improve safety, reliability, and availability of assets. To do so, it is important to understand the fundamentals between the disciplines to effectively develop and deploy such systems. This survey presents an overview of the fundamentals of artificial intelligence and the state of development of learning-based methods in nuclear science and engineering to identify the risks and opportunities of applying such methods to nuclear applications. This paper focuses on applications related to three key subareas related to safety and decision-making. These are reactor health and monitoring, radiation detection, and optimization. The principles of learning-based methods in these applications are explained and recent studies are explored. Furthermore, as these methods have become more practical during the past decade, it is foreseen that the popularity of learning-based methods in nuclear science and technology will increase; consequently, understanding the benefits and barriers of implementing such methodologies can help create better research plans, and identify project risks and opportunities.
63 citations
01 Oct 2018
TL;DR: A Zero-Biased MNU-Aware SRAM Cell (ZBMA) is proposed for DNN accelerators based on two observations: the data in DNNs has a strong bias towards zero, and data flipping from zero to one is more likely to cause a failure of DNN outputs.
Abstract: Deep learning neural network (DNN) accelerators have been increasingly deployed in many fields recently, including safety-critical applications such as autonomous vehicles and unmanned aircraft. Meanwhile, the vulnerability of DNN accelerators to soft errors (e.g., caused by high-energy particle strikes) rapidly increases as manufacturing technology continues to scale down. A failure in the operation of DNN accelerators may lead to catastrophic consequences. Among the existing reliability techniques that can be applied to DNN accelerators, fully-hardened SRAM cells are more attractive due to their low overhead in terms of area, power, and delay. However, current fully-hardened SRAM cells can only tolerate soft errors produced by single-node upsets (SNUs) and cannot fully resist the soft errors caused by multiple-node upsets (MNUs). In this paper, a Zero-Biased MNU-Aware SRAM Cell (ZBMA) is proposed for DNN accelerators based on two observations: first, the data (feature maps, weights) in DNNs has a strong bias towards zero; second, data flipping from zero to one is more likely to cause a failure of DNN outputs. The proposed memory cell provides robust immunity against node upsets and reduces the leakage current dramatically when zero is stored in the cell. Evaluation results show that when the proposed memory cell is integrated into a DNN accelerator, the total static power of the accelerator is reduced by 2.6X and 1.79X compared with accelerators based on conventional and state-of-the-art fully-hardened memory cells, respectively. In terms of reliability, the DNN accelerator based on the proposed memory cell eliminates 99.99% of the false outputs caused by soft errors across different DNNs.
49 citations
Cites background from "Nuclear energy system’s behavior an..."
...Deep Learning neural networks (DNNs) are widely deployed in many fields [17]–[20] due to the achieved high accuracy....
TL;DR: In this paper, a feed-forward backpropagation artificial neural network (ANN) model was trained to simulate the interaction between the reactor core and the primary and secondary coolant systems in a pressurized water reactor.
Abstract: A Nuclear Power Plant (NPP) is a complex dynamic system-of-systems with highly nonlinear behaviors. In order to control the plant operation under both normal and abnormal conditions, the different systems in NPPs (e.g., the reactor core components, primary and secondary coolant systems) are usually monitored continuously, resulting in very large amounts of data. This situation makes it possible to integrate relevant qualitative and quantitative knowledge with artificial intelligence techniques to provide faster and more accurate behavior predictions, leading to more rapid decisions, based on actual NPP operation data. Data-driven models (DDM) rely on artificial intelligence to learn autonomously based on patterns in data, and they represent alternatives to physics-based models that typically require significant computational resources and might not fully represent the actual operation conditions of an NPP. In this study, a feed-forward backpropagation artificial neural network (ANN) model was trained to simulate the interaction between the reactor core and the primary and secondary coolant systems in a pressurized water reactor. The transients used for model training included perturbations in reactivity, steam valve coefficient, reactor core inlet temperature, and steam generator inlet temperature. Uncertainties of the plant physical parameters and operating conditions were also incorporated in these transients. Eight training functions were adopted during the training stage to develop the most efficient network. The developed ANN model predictions were subsequently tested successfully considering different new transients. Overall, through prompt prediction of NPP behavior under different transients, the study aims at demonstrating the potential of artificial intelligence to empower rapid emergency response planning and risk mitigation strategies.
29 citations
TL;DR: This paper comprehensively analyses recent advancements in artificial intelligence for applications in the nuclear power industry and provides a critical assessment of the various nuances of artificial intelligence for the nuclear industry.
Abstract: The nuclear industry is in crisis, and innovation is the central theme of its future survival. Artificial intelligence has made a quantum leap in the last few years. This paper comprehensively analyses recent advancements in artificial intelligence for applications in the nuclear power industry. A brief background of the machine-learning techniques researched and proposed in this domain is outlined, and a critical assessment of the various nuances of artificial intelligence for the nuclear industry is provided. The lack of operational data from real power plants, especially for transient and accident scenarios, is a major concern regarding the accuracy of intelligent systems. There is no universally agreed opinion among researchers on the best artificial intelligence technique for a specific purpose, as the intelligent systems developed by various researchers are based on different data sets. An interlaboratory framework or round-robin programme to develop an artificial-intelligence tool for a specific purpose, based on the same database, could be crucial in establishing accuracy and thus identifying the best technique. The black-box nature of artificial-intelligence techniques also poses a serious challenge for implementation in the nuclear industry, as it makes them prone to fooling.
25 citations
TL;DR: This paper presents the design status, innovative features, and characteristics of iPWR-type SMRs, delineates the common technology trends, and highlights the key features of each design.
Abstract: In recent years, the trend in small modular reactor (SMR) technology development has been towards the water-cooled integral pressurized water reactor (iPWR) type. The innovative and unique characteristics of iPWR-type SMRs provide an enhanced safety margin and thus offer the potential to expand the use of safe, clean, and reliable nuclear energy to a broad range of energy applications. Currently, there are about eleven (11) iPWR-type SMR concepts and designs worldwide in various phases of development: under construction, licensed or in the licensing review process, in the development phase, and in the conceptual design phase. The lack of a national and/or international comparative framework for safety in SMR design, as well as the proprietary nature of the designs, introduces non-uniformity and uncertainty into regulatory review. The major primary reactor coolant system components, such as the steam generator (SG), pressurizer (PRZ), and control rod drive mechanism (CRDM), are integrated within the reactor pressure vessel (RPV) to inherently eliminate or minimize potential accident initiators, such as large-break loss-of-coolant accidents (LB-LOCAs). This paper presents the design status, innovative features, and characteristics of iPWR-type SMRs. We delineate the common technology trends and highlight the key features of each design. These reactor concepts exploit natural physical laws, such as gravity, to achieve the safety functions with a high level of margin and reliability. In fact, many SMR designs employ passive safety systems (PSS) to meet evolving, stringent regulatory requirements and the extended consideration of severe accidents. A generic classification of PSS is provided. We constrain our discussion to the decay heat removal system, safety injection system, reactor depressurization system, and containment system.
A review and comparative assessment of these passive features in each iPWR-type SMR design is provided, and we underline how it may be more advantageous to employ passive systems in SMRs than in conventional reactor designs.
15 citations
References
Book
TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.
38,208 citations
TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; in the process, the hidden units come to represent important features of the task domain.
Abstract: We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure.
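The procedure described in this abstract (adjust weights to minimize output error, letting hidden units discover useful features) can be sketched in a few lines. The following is a minimal, illustrative implementation of backpropagation for a tiny 2-2-1 sigmoid network trained on the XOR task; the architecture, learning rate, and data are assumptions chosen for demonstration, not the paper's experimental setup.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)  # reproducible weight initialization

# Tiny 2-2-1 network: two inputs, two hidden sigmoid units, one sigmoid output.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # input -> hidden
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]                      # hidden -> output
b2 = 0.0

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR truth table
lr = 0.5  # learning rate

def forward(x):
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(2)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(2)) + b2)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

loss_before = total_loss()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Output delta: derivative of the squared error through the sigmoid.
        dy = (y - t) * y * (1 - y)
        for j in range(2):
            # Hidden delta: the output error propagated back through w2[j].
            dh = dy * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy
loss_after = total_loss()
```

XOR is the classic case the abstract alludes to: no single-layer perceptron can solve it, but the hidden units here learn intermediate features that make the problem linearly separable at the output.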
23,814 citations
Book
01 Jan 2020
TL;DR: In this book, the authors present a comprehensive introduction to the theory and practice of artificial intelligence for modern applications, including game playing, planning and acting, and reinforcement learning with neural networks.
Abstract: The long-anticipated revision of this #1 selling book offers the most comprehensive, state of the art introduction to the theory and practice of artificial intelligence for modern applications. Intelligent Agents. Solving Problems by Searching. Informed Search Methods. Game Playing. Agents that Reason Logically. First-order Logic. Building a Knowledge Base. Inference in First-Order Logic. Logical Reasoning Systems. Practical Planning. Planning and Acting. Uncertainty. Probabilistic Reasoning Systems. Making Simple Decisions. Making Complex Decisions. Learning from Observations. Learning with Neural Networks. Reinforcement Learning. Knowledge in Learning. Agents that Communicate. Practical Communication in English. Perception. Robotics. For computer professionals, linguists, and cognitive scientists interested in artificial intelligence.
16,983 citations
TL;DR: In this article, it is shown that many particular choices among possible neurophysiological assumptions are equivalent, in the sense that for every net behaving under one assumption, there exists another net which behaves under another and gives the same results, although perhaps not in the same time.
Abstract: Because of the “all-or-none” character of nervous activity, neural events and the relations among them can be treated by means of propositional logic. It is found that the behavior of every net can be described in these terms, with the addition of more complicated logical means for nets containing circles; and that for any logical expression satisfying certain conditions, one can find a net behaving in the fashion it describes. It is shown that many particular choices among possible neurophysiological assumptions are equivalent, in the sense that for every net behaving under one assumption, there exists another net which behaves under the other and gives the same results, although perhaps not in the same time. Various applications of the calculus are discussed.
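The “all-or-none” units this abstract describes can be modeled directly as threshold gates computing propositional logic. A minimal sketch of such McCulloch-Pitts units follows; the particular weights and thresholds are one conventional choice for each gate, not values taken from the paper.

```python
def mcp_neuron(inputs, weights, threshold):
    # A McCulloch-Pitts unit: fires (outputs 1) if and only if the weighted
    # sum of its binary inputs meets the threshold ("all-or-none" activity).
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Logic gates as single threshold units (illustrative parameter choices):
def AND(x1, x2): return mcp_neuron([x1, x2], [1, 1], 2)
def OR(x1, x2):  return mcp_neuron([x1, x2], [1, 1], 1)
def NOT(x):      return mcp_neuron([x], [-1], 0)
```

Because nets of such units can realize any propositional formula (with loops handling the more complicated cases the abstract mentions), they form the bridge between nervous activity and propositional logic that the paper establishes.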
14,937 citations
TL;DR: This paper demonstrates how constraints from the task domain can be integrated into a backpropagation network through the architecture of the network; the approach is successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service.
Abstract: The ability of learning networks to generalize can be greatly enhanced by providing constraints from the task domain. This paper demonstrates how such constraints can be integrated into a backpropagation network through the architecture of the network. This approach has been successfully applied to the recognition of handwritten zip code digits provided by the U.S. Postal Service. A single network learns the entire recognition operation, going from the normalized image of the character to the final classification.
9,775 citations