Posted Content

Multi-fidelity information fusion with concatenated neural networks

TL;DR: In this article, a concatenated neural network approach is proposed to combine the self-similarity solution and power-law velocity profile (low-fidelity models) with the noisy data obtained either from experiments or computational fluid dynamics simulations (high-fidelity models).
Abstract: Recently, computational modeling has shifted towards the use of deep learning and other data-driven modeling frameworks. Although this shift holds promise in many applications, such as design optimization and real-time control, by lowering the computational burden, training deep learning models requires a huge amount of data. Such large datasets are not always available for scientific problems, and their scarcity leads to poorly generalizable data-driven models. This gap can be bridged by leveraging information from physics-based models. Exploiting prior knowledge about the problem at hand, this study puts forth a concatenated neural network approach to build more tailored, effective, and efficient machine learning models. For our analysis, without loss of generalizability and modularity, we focus on the development of predictive models for laminar and turbulent boundary layer flows. In particular, we combine the self-similarity solution and power-law velocity profile (low-fidelity models) with the noisy data obtained either from experiments or computational fluid dynamics simulations (high-fidelity models) through a concatenated neural network. We illustrate how the knowledge from these simplified models reduces the uncertainties associated with deep learning models. The proposed framework produces physically consistent models that attempt to achieve better generalization than models trained purely on data. While we demonstrate our framework for a problem relevant to fluid mechanics, its workflow and principles can be adopted for many scientific problems where empirical models are prevalent. In line with the growing demand for physics-guided machine learning principles, this work builds a bridge between extensive physics-based theories and data-driven modeling paradigms and paves the way for using hybrid modeling approaches in next-generation digital twin technologies.
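The core architectural idea can be sketched in a few lines of code. The following is a minimal illustration rather than the authors' implementation: it assumes PyTorch, hypothetical layer sizes and inputs, and a 1/7th power-law profile as the low-fidelity estimate that is concatenated with the learned hidden features before the final prediction layers.

import torch
import torch.nn as nn

class ConcatenatedNet(nn.Module):
    # Fuses a low-fidelity physics estimate with learned features before the
    # layers that predict the high-fidelity target.
    def __init__(self, n_inputs=2, n_hidden=32):
        super().__init__()
        self.feature_branch = nn.Sequential(
            nn.Linear(n_inputs, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, n_hidden), nn.Tanh(),
        )
        self.head = nn.Sequential(
            nn.Linear(n_hidden + 1, n_hidden), nn.Tanh(),
            nn.Linear(n_hidden, 1),
        )

    def forward(self, x, u_low_fidelity):
        # u_low_fidelity: physics-based estimate (e.g. a power-law profile)
        # evaluated at the same inputs x.
        h = self.feature_branch(x)
        fused = torch.cat([h, u_low_fidelity], dim=-1)
        return self.head(fused)

model = ConcatenatedNet()
x = torch.rand(64, 2)              # hypothetical inputs (e.g. Re, y/delta)
u_lf = x[:, 1:2] ** (1.0 / 7.0)    # illustrative 1/7th power-law estimate
u_pred = model(x, u_lf)            # prediction trained against high-fidelity data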
Citations
Journal ArticleDOI
TL;DR: A fusion verification method that combines traffic detection with XSS payload detection, using machine learning to detect XSS (cross-site scripting) attacks, is proposed, together with seven new payload features that improve detection efficiency.
Abstract: The frequent variation of XSS (cross-site scripting) payloads makes them difficult to detect effectively with static and dynamic analysis. In this paper, we propose a fusion verification method that combines traffic detection with XSS payload detection, using machine learning to detect XSS attacks. In addition, we propose seven new payload features to improve detection efficiency. In order to verify the effectiveness of our method, we simulated and tested 20 public CVE (Common Vulnerabilities and Exposures) XSS attacks. The experimental results show that our proposed method has better accuracy than the single traffic detection model: the recall rate increased by an average of 48%, the F1 score increased by an average of 27.94%, the accuracy rate increased by 9.29%, and the precision rate increased by 3.81%. Moreover, the seven new features proposed in this paper account for 34.12% of the total contribution rate of the classifier.
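As a rough illustration of the payload-detection stage of such a pipeline, the sketch below trains a classifier on simple lexical payload features and fuses its verdict with a traffic-level flag. The features and training data shown are hypothetical stand-ins, not the seven features or the dataset used in the paper.

from sklearn.ensemble import RandomForestClassifier

def payload_features(payload: str) -> list:
    # Simple lexical features of the kind used for XSS payload detection.
    p = payload.lower()
    return [
        len(payload),
        p.count("<script"),
        payload.count("%"),              # URL-encoding density
        int("onerror=" in p),
        int("javascript:" in p),
    ]

# Hypothetical training data: (payload, label) pairs.
train = [("<script>alert(1)</script>", 1), ("hello world", 0),
         ("<img src=x onerror=alert(1)>", 1), ("id=42&page=home", 0)]
X = [payload_features(p) for p, _ in train]
y = [label for _, label in train]
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def fused_verdict(traffic_flagged: bool, payload: str) -> bool:
    # Fusion verification: confirm a traffic-level alert with the
    # payload-level classifier before reporting an XSS attack.
    return traffic_flagged and bool(clf.predict([payload_features(payload)])[0])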

5 citations

Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate that injecting partially known information at an intermediate layer of a DNN can improve model accuracy, reduce model uncertainty, and yield improved convergence during training.

4 citations

Journal ArticleDOI
TL;DR: In this article, the authors review recent advances in AI-driven materials-by-design and their applications to energetic materials (EM), and provide a perspective on these methods in terms of their potential, practicality, and efficacy towards the realization of materials-by-design, suggesting future directions such as meta-learning, active learning, Bayesian learning, and semi-/weakly-supervised learning.
Abstract: Artificial intelligence (AI) is rapidly emerging as an enabling tool for solving various complex materials design problems. This paper aims to review recent advances in AI-driven materials-by-design and their applications to energetic materials (EM). Trained with data from numerical simulations and/or physical experiments, AI models can assimilate trends and patterns within the design parameter space, identify optimal material designs (micro-morphologies, combinations of materials in composites, etc.), and point to designs with superior/targeted property and performance metrics. We review approaches focusing on such capabilities with respect to the three main stages of materials-by-design, namely representation learning of microstructure morphology (i.e., shape descriptors), structure-property-performance (S-P-P) linkage estimation, and optimization/design exploration. We provide a perspective view of these methods in terms of their potential, practicality, and efficacy towards the realization of materials-by-design. Specifically, methods in the literature are evaluated in terms of their capacity to learn from a small/limited amount of data, computational complexity, generalizability/scalability to other material species and operating conditions, interpretability of the model predictions, and the burden of supervision/data annotation. Finally, we suggest a few promising future research directions for EM materials-by-design, such as meta-learning, active learning, Bayesian learning, and semi-/weakly-supervised learning, to bridge the gap between machine learning research and EM research.

3 citations

Journal ArticleDOI
TL;DR: In this article, the authors develop machine learning methods to reconstruct flow features from sparse sensor measurements during transient vortex-airfoil wake interaction using only a limited amount of training data.
Abstract: Reconstruction of unsteady vortical flow fields from limited sensor measurements is challenging. We develop machine learning methods to reconstruct flow features from sparse sensor measurements during transient vortex–airfoil wake interaction using only a limited amount of training data. The present machine learning models accurately reconstruct the aerodynamic force coefficients, pressure distributions over the airfoil surface, and the two-dimensional vorticity field for a variety of untrained cases. A multi-layer perceptron is used for estimating aerodynamic forces and pressure profiles over the surface, establishing a nonlinear model between the pressure sensor measurements and the output variables. A combination of a multi-layer perceptron with a convolutional neural network is utilized to reconstruct the vortical wake. Furthermore, the use of transfer learning and a long short-term memory algorithm combined in the training models greatly improves the reconstruction of transient wakes by embedding the dynamics. The present machine-learning methods are able to estimate the transient flow features while exhibiting robustness against noisy sensor measurements. Finally, appropriate sensor locations over different time periods are assessed for accurately estimating the wakes. The present study offers insights into the dynamics of vortex–airfoil interaction and the development of data-driven flow estimation.
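The sensor-to-force part of this pipeline can be pictured with a minimal sketch, assuming a PyTorch multi-layer perceptron and a hypothetical number of surface-pressure sensors; the authors' actual network sizes, sensor placements, and training procedure are described in the paper.

import torch
import torch.nn as nn

n_sensors = 8                      # assumed number of pressure sensors
mlp = nn.Sequential(
    nn.Linear(n_sensors, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),              # outputs: lift and drag coefficients
)

pressures = torch.randn(16, n_sensors)   # a batch of noisy sensor snapshots
forces = mlp(pressures)                  # estimated aerodynamic coefficients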

1 citation

References
Book ChapterDOI
21 Jun 2000
TL;DR: Some previous studies comparing ensemble methods are reviewed, and some new experiments are presented to uncover the reasons that Adaboost does not overfit rapidly.
Abstract: Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a (weighted) vote of their predictions. The original ensemble method is Bayesian averaging, but more recent algorithms include error-correcting output coding, Bagging, and boosting. This paper reviews these methods and explains why ensembles can often perform better than any single classifier. Some previous studies comparing ensemble methods are reviewed, and some new experiments are presented to uncover the reasons that Adaboost does not overfit rapidly.
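For reference, the (weighted) voting idea described above looks roughly like the following scikit-learn sketch on synthetic data; it is illustrative only and does not reproduce the paper's experiments.

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Combine a bagged-tree ensemble and a boosted ensemble by a soft
# (probability-weighted) vote over their predictions.
ensemble = VotingClassifier(
    estimators=[
        ("bagging", BaggingClassifier(DecisionTreeClassifier(), n_estimators=25, random_state=0)),
        ("boosting", AdaBoostClassifier(n_estimators=25, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))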

5,679 citations

Journal ArticleDOI
TL;DR: In this article, the authors introduce physics-informed neural networks, which are trained to solve supervised learning tasks while respecting any given laws of physics described by general nonlinear partial differential equations.
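A stripped-down example of the physics-informed training idea, assuming PyTorch and a toy ODE du/dx = -u with u(0) = 1 rather than the general nonlinear PDEs treated in the paper: the loss penalizes the residual of the governing equation at collocation points alongside the boundary condition.

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.linspace(0.0, 1.0, 50).reshape(-1, 1).requires_grad_(True)  # collocation points
x0 = torch.zeros(1, 1)                                                # boundary point

for step in range(2000):
    u = net(x)
    dudx = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    residual = dudx + u                                # enforces du/dx + u = 0
    loss = (residual ** 2).mean() + (net(x0) - 1.0).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()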

5,448 citations

Journal ArticleDOI
TL;DR: In this article, a non-iterative method for handling the coupling of the implicitly discretised time-dependent fluid flow equations is described; the method is based on the use of pressure and velocity as dependent variables and is hence applicable to both the compressible and incompressible versions of the transport equations.

4,019 citations

Journal ArticleDOI
13 Feb 2019 - Nature
TL;DR: It is argued that contextual cues should be used as part of deep learning to gain further process understanding of Earth system science problems, improving the predictive ability of seasonal forecasting and modelling of long-range spatial connections across multiple timescales.
Abstract: Machine learning approaches are increasingly used to extract patterns and insights from the ever-increasing stream of geospatial data, but current approaches may not be optimal when system behaviour is dominated by spatial or temporal context. Here, rather than amending classical machine learning, we argue that these contextual cues should be used as part of deep learning (an approach that is able to extract spatio-temporal features automatically) to gain further process understanding of Earth system science problems, improving the predictive ability of seasonal forecasting and modelling of long-range spatial connections across multiple timescales, for example. The next step will be a hybrid modelling approach, coupling physical process models with the versatility of data-driven machine learning.

2,014 citations

Proceedings Article
01 Jan 2017
TL;DR: The authors propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high-quality predictive uncertainty estimates that are as good as or better than those of approximate Bayesian NNs.
Abstract: Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet.
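The recipe can be sketched as follows, assuming PyTorch, a toy regression problem, and plain mean-squared-error training; the paper additionally uses a proper scoring rule (a heteroscedastic likelihood) and adversarial training, which are omitted here for brevity.

import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

x = torch.linspace(-1, 1, 100).reshape(-1, 1)
y = torch.sin(3 * x) + 0.05 * torch.randn_like(x)    # toy noisy data

ensemble = []
for seed in range(5):                                 # five independently trained members
    torch.manual_seed(seed)
    net = make_net()
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        loss = ((net(x) - y) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    ensemble.append(net)

with torch.no_grad():
    preds = torch.stack([net(x) for net in ensemble])
mean, std = preds.mean(dim=0), preds.std(dim=0)       # prediction and its uncertainty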

1,769 citations