Journal ArticleDOI

Progress Variable Variance and Filtered Rate Modelling Using Convolutional Neural Networks and Flamelet Methods

TLDR
A purely data-driven modelling approach using deep convolutional neural networks is discussed in the context of Large Eddy Simulation (LES) of turbulent premixed flames, and is demonstrated successfully a priori for both the sub-grid scale progress variable variance and the filtered reaction rate.
Abstract
A purely data-driven modelling approach using deep convolutional neural networks is discussed in the context of Large Eddy Simulation (LES) of turbulent premixed flames. The assessment of the method is conducted a priori using direct numerical simulation data. The network is trained to perform deconvolution on the filtered density and the filtered density-progress variable product, and thereby obtain estimates of the unfiltered progress variable field. A filtered function of the progress variable can then be approximated on the LES mesh using the deconvoluted field. This new strategy for tackling turbulent combustion modelling is demonstrated successfully for two fundamental ingredients of premixed turbulent combustion modelling: the sub-grid scale progress variable variance and, via flamelet methods, the filtered reaction rate.
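The a priori workflow described in the abstract can be sketched in a few lines: filter a DNS field, deconvolute the filtered field, then filter a nonlinear function of the deconvoluted estimate to model a sub-grid quantity. The sketch below is illustrative only, and makes several assumptions not in the paper: a 1-D synthetic progress-variable field stands in for DNS data, constant density replaces the Favre (density-weighted) filtering used in the paper, and a simple van Cittert iteration replaces the trained CNN deconvolution operator.

```python
import numpy as np

def box_filter(field, width):
    """Top-hat LES filter: moving average over `width` grid points."""
    kernel = np.ones(width) / width
    return np.convolve(field, kernel, mode="same")

def deconvolve(filtered, width, iterations=5):
    """Van Cittert iterative deconvolution, standing in for the trained CNN."""
    estimate = filtered.copy()
    for _ in range(iterations):
        estimate = estimate + (filtered - box_filter(estimate, width))
    return estimate

# Synthetic 1-D "DNS" progress-variable field: a thin, wrinkled flame front.
x = np.linspace(0.0, 1.0, 512)
c = 0.5 * (1.0 + np.tanh((x - 0.5 + 0.02 * np.sin(40.0 * np.pi * x)) / 0.01))

width = 16                      # filter width in grid points
c_bar = box_filter(c, width)    # resolved field, the only input known in LES

# Deconvolute, then filter the nonlinear function of the deconvoluted field
# to model the sub-grid variance on the LES mesh:
c_star = deconvolve(c_bar, width)
var_model = box_filter(c_star ** 2, width) - c_bar ** 2

# Exact sub-grid variance from the DNS field, for the a priori comparison:
var_exact = box_filter(c ** 2, width) - c_bar ** 2
```

Comparing `var_model` against `var_exact` point by point is the a priori test: both are computed from the same DNS field, but only `var_model` is obtainable from quantities available on the LES mesh.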


Citations
Journal ArticleDOI

Chemistry reduction using machine learning trained from non-premixed micro-mixing modeling: Application to DNS of a syngas turbulent oxy-flame with side-wall effects

TL;DR: In this article, a chemistry reduction approach based on machine learning is proposed and applied to direct numerical simulation (DNS) of a turbulent non-premixed syngas oxy-flame interacting with a cooled wall.
Journal ArticleDOI

Combustion machine learning: Principles, progress and prospects

TL;DR: This article reviews data sources, data-driven techniques, and concepts for combustion machine learning, focusing on interpretability, uncertainty quantification, robustness, consistency, creation and curation of benchmark data, and the augmentation of ML methods with prior combustion domain knowledge.
Journal ArticleDOI

A priori analysis on deep learning of subgrid-scale parameterizations for Kraichnan turbulence

TL;DR: Different data-driven parameterizations for large eddy simulation of two-dimensional turbulence are investigated in an a priori setting, and a computational gain can be achieved using an intelligent eddy viscosity model that learns the eddy viscosity computed by the dynamic Smagorinsky model (DSM) instead of the sub-grid-scale stresses.
Journal ArticleDOI

Sensing the turbulent large-scale motions with their wall signature

TL;DR: In this paper, the authors assess the capability of extended proper orthogonal decomposition (EPOD) and convolutional neural networks (CNNs) to reconstruct large-scale and very-large-scale motions (LSMs and VLSMs respectively) employing wall-shear-stress measurements in wall-bounded turbulent flows.

Machine learning for combustion

TL;DR: In this article, the authors present an overview of studies on the applications of machine learning in combustion science fields over the past several decades, including chemical reactions, combustion modeling, combustion measurement, engine performance prediction and optimization, and fuel design.
References
Journal ArticleDOI

Mastering the game of Go with deep neural networks and tree search

TL;DR: Using this search algorithm, the program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0, the first time that a computer program has defeated a human professional player in the full-sized game of Go.
Journal ArticleDOI

General circulation experiments with the primitive equations

TL;DR: In this article, an extended period numerical integration of a baroclinic primitive equation model has been made for the simulation and the study of the dynamics of the atmosphere's general circulation, and the solution corresponding to external gravitational propagation is filtered by requiring the vertically integrated divergence to vanish identically.
Proceedings Article

Sequence to Sequence Learning with Neural Networks

TL;DR: The authors used a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector.
MonographDOI

Turbulent Flows: Fundamentals
