Journal ArticleDOI

Progress Variable Variance and Filtered Rate Modelling Using Convolutional Neural Networks and Flamelet Methods

TL;DR: A purely data-driven modelling approach using deep convolutional neural networks is discussed in the context of Large Eddy Simulation (LES) of turbulent premixed flames and demonstrated successfully, a priori, for both the sub-grid scale progress variable variance and the filtered reaction rate.
Abstract: A purely data-driven modelling approach using deep convolutional neural networks is discussed in the context of Large Eddy Simulation (LES) of turbulent premixed flames. The assessment of the method is conducted a priori using direct numerical simulation data. The network is trained to perform deconvolution on the filtered density and the filtered density-progress variable product, thereby obtaining estimates of the un-filtered progress variable field. A filtered function of the progress variable can then be approximated on the LES mesh using the deconvolved field. This new strategy for tackling turbulent combustion modelling is demonstrated successfully for two fundamental ingredients of premixed turbulent combustion modelling: the sub-grid scale progress variable variance and, using flamelet methods, the filtered reaction rate.
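The a priori workflow described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it builds a synthetic 1-D progress variable field as a stand-in for DNS data, applies a top-hat filter, and forms the sub-grid variance from a deconvolved field. A perfect deconvolution (the exact field) is used as a placeholder for the trained CNN's estimate; all names are illustrative.

```python
import numpy as np

def box_filter(f, w):
    """Top-hat (box) filter of width w grid points, periodic boundaries."""
    kernel = np.ones(w) / w
    padded = np.concatenate([f[-w:], f, f[:w]])
    return np.convolve(padded, kernel, mode="same")[w:-w]

# Synthetic un-filtered progress variable field (stand-in for DNS data)
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
c = 0.5 * (1 + np.tanh(3 * np.sin(x)))          # c in [0, 1]

w = 8                                            # filter width in grid points
c_bar = box_filter(c, w)                         # what LES would resolve

# A trained CNN would estimate the un-filtered field from the filtered
# inputs; here the exact field stands in as a "perfect deconvolution".
c_star = c

# Sub-grid variance from the deconvolved field: filter c*^2, subtract c_bar^2
var_sgs = box_filter(c_star**2, w) - c_bar**2
```

With a perfect deconvolution the variance is exact and non-negative; with a real network output it becomes an estimate whose quality is what the a priori assessment measures.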
Citations
Journal ArticleDOI
TL;DR: In this article, a chemistry reduction approach based on machine learning is proposed and applied to direct numerical simulation (DNS) of a turbulent non-premixed syngas oxy-flame interacting with a cooled wall.

56 citations

Journal ArticleDOI
TL;DR: This article reviews data sources, data-driven techniques, and concepts for combustion machine learning, focusing on interpretability, uncertainty quantification, robustness, consistency, creation and curation of benchmark data, and the augmentation of ML methods with prior combustion domain knowledge.

47 citations

Journal ArticleDOI
TL;DR: Different data-driven parameterizations for large eddy simulation of two-dimensional turbulence are investigated in the a priori setting; a computational gain is demonstrated using an intelligent eddy-viscosity model that learns the eddy viscosity computed by the DSM instead of the subgrid-scale stresses.
Abstract: In the present study, we investigate different data-driven parameterizations for large eddy simulation of two-dimensional turbulence in the a priori setting. These models utilize resolved flow field variables on the coarser grid to estimate the subgrid-scale stresses. We use data-driven closure models based on localized learning that employs a multilayer feedforward artificial neural network with point-to-point mapping and neighboring stencil data mapping, and convolutional neural network fed by data snapshots of the whole domain. The performance of these data-driven closure models is measured through a probability density function and is compared with the dynamic Smagorinsky model (DSM). The quantitative performance is evaluated using the cross-correlation coefficient between the true and predicted stresses. We analyze different frameworks in terms of the amount of training data, selection of input and output features, their characteristics in modeling with accuracy, and training and deployment computational time. We also demonstrate computational gain that can be achieved using the intelligent eddy viscosity model that learns eddy viscosity computed by the DSM instead of subgrid-scale stresses. We detail the hyperparameters optimization of these models using the grid search algorithm.
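The a priori training target and evaluation metric in this abstract can be sketched directly. The snippet below is a hedged illustration, not the authors' code: it computes the "true" subgrid stress by filtering synthetic velocity fields onto a coarser grid, then scores a stand-in model prediction with the cross-correlation coefficient the paper uses. Field sizes, the noise level, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def coarse(f, r):
    """Box-average filter onto a grid coarsened by factor r."""
    n = f.shape[0]
    return f.reshape(n // r, r, n // r, r).mean(axis=(1, 3))

# Synthetic 2-D velocity fields (stand-ins for resolved DNS snapshots)
n, r = 64, 4
u = rng.standard_normal((n, n))
v = rng.standard_normal((n, n))

# "True" subgrid stress that the data-driven closures are trained to predict
tau_uv = coarse(u * v, r) - coarse(u, r) * coarse(v, r)

def cc(a, b):
    """Cross-correlation coefficient between two fields (the paper's metric)."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# Stand-in for a model prediction: the true stress plus small noise
pred = tau_uv + 0.05 * rng.standard_normal(tau_uv.shape)
score = cc(tau_uv, pred)
```

A perfect model gives a coefficient of 1; the closer the learned closure's score is to 1, the better it reproduces the true stresses.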

44 citations

Journal ArticleDOI
TL;DR: In this paper, the authors assess the capability of extended proper orthogonal decomposition (EPOD) and convolutional neural networks (CNNs) to reconstruct large-scale and very-large-scale motions (LSMs and VLSMs respectively) employing wall-shear-stress measurements in wall-bounded turbulent flows.
Abstract: This study assesses the capability of extended proper orthogonal decomposition (EPOD) and convolutional neural networks (CNNs) to reconstruct large-scale and very-large-scale motions (LSMs and VLSMs respectively) employing wall-shear-stress measurements in wall-bounded turbulent flows. Both techniques are used to reconstruct the instantaneous LSM evolution in the flow field as a combination of proper orthogonal decomposition (POD) modes, employing a limited set of instantaneous wall-shear-stress measurements. Due to the dominance of nonlinear effects, only CNNs provide satisfying results. Being able to account for nonlinearities in the flow, CNNs are shown to perform significantly better than EPOD in terms of both instantaneous flow-field estimation and turbulent-statistics reconstruction. CNNs are able to provide a more effective reconstruction performance employing more POD modes at larger distances from the wall and employing lower wall-measurement resolutions. Furthermore, the capability of tackling nonlinear features of CNNs results in estimation capabilities that are weakly dependent on the distance from the wall.

33 citations

DOI
24 Nov 2021
TL;DR: In this article, the authors present an overview of studies on the applications of machine learning in combustion science fields over the past several decades, including chemical reactions, combustion modeling, combustion measurement, engine performance prediction and optimization, and fuel design.
Abstract: Combustion science is an interdisciplinary study that involves nonlinear physical and chemical phenomena in time and length scales, including complex chemical reactions and fluid flows. Combustion widely supplies energy for powering vehicles, heating houses, generating electricity, cooking food, etc. The key to studying combustion is to improve the combustion efficiency with minimum emission of pollutants. Machine learning facilitates data-driven techniques for handling large amounts of combustion data, either through experiments or simulations under multiple spatiotemporal scales, thereby finding the hidden patterns underlying these data and promoting combustion research. This work presents an overview of studies on the applications of machine learning in combustion science fields over the past several decades. We introduce the fundamentals of machine learning and its usage in aiding chemical reactions, combustion modeling, combustion measurement, engine performance prediction and optimization, and fuel design. The opportunities and limitations of using machine learning in combustion studies are also discussed. This paper aims to provide readers with a portrait of what and how machine learning can be used in combustion research and to inspire researchers in their ongoing studies. Machine learning techniques are rapidly advancing in this era of big data, and there is high potential for exploring the combination between machine learning and combustion research and achieving remarkable results.

33 citations

References
Proceedings Article
03 Dec 2012
TL;DR: A deep convolutional neural network achieving state-of-the-art ImageNet classification performance is presented; it consists of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
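The "60 million parameters" figure follows directly from the layer sizes in the original paper. The sketch below tallies them in plain Python: the grouped conv2/4/5 layers see only half the input channels, as in the two-GPU layout; the layer shapes are taken from the published architecture.

```python
# Parameter count of the AlexNet layout described in the abstract
# (five conv layers, three fully-connected layers, 1000-way softmax).
conv = [  # (kernel size, input channels seen per filter, output channels)
    (11, 3, 96),     # conv1
    (5, 48, 256),    # conv2 (grouped: sees half of 96 channels)
    (3, 256, 384),   # conv3
    (3, 192, 384),   # conv4 (grouped)
    (3, 192, 256),   # conv5 (grouped)
]
fc = [(6 * 6 * 256, 4096), (4096, 4096), (4096, 1000)]

params = sum(k * k * cin + 1 for k, cin, _ in conv for _ in range(1))  # placeholder, replaced below
params = sum(k * k * cin * cout + cout for k, cin, cout in conv)       # weights + biases
params += sum(i * o + o for i, o in fc)
print(f"{params:,}")  # prints 60,965,224 — the paper's "60 million parameters"
```

The fully-connected layers dominate: fc6 alone (9216 × 4096 weights) accounts for well over half the total, which is why dropout is applied there.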

73,978 citations

Journal ArticleDOI
28 May 2015-Nature
TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Abstract: Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

46,982 citations

Book
18 Nov 2016
TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Journal ArticleDOI
26 Feb 2015-Nature
TL;DR: This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Abstract: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. 
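The temporal-difference rule underlying the deep Q-network can be shown in its simplest, tabular form. The snippet below is an illustrative sketch only: the DQN replaces the table with a convolutional network over raw pixels, while the update target is the same. States, actions, reward, and hyperparameter values here are arbitrary.

```python
# Tabular Q-learning update: move Q(s, a) toward r + gamma * max_b Q(s', b).
alpha, gamma = 0.5, 0.9                      # learning rate, discount factor
Q = {(s, a): 0.0 for s in range(3) for a in range(2)}

def td_update(s, a, r, s_next):
    """One temporal-difference update of the action-value table."""
    best_next = max(Q[(s_next, b)] for b in range(2))
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

td_update(0, 1, 1.0, 1)                      # reward 1 for action 1 in state 0
```

In the DQN, `Q` becomes a deep network mapping game frames to action values, trained end-to-end so that the same update can be applied directly to high-dimensional sensory input.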

23,074 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations