Journal ArticleDOI

Deep convolutional neural networks for estimating porous material parameters with ultrasound tomography

TL;DR: The feasibility of data-based machine learning applied to ultrasound tomography is studied to estimate water-saturated porous material parameters; a high-order discontinuous Galerkin method serves as the forward model, while deep convolutional neural networks solve the parameter estimation problem.
Abstract: We study the feasibility of data-based machine learning applied to ultrasound tomography to estimate water-saturated porous material parameters. In this work, the data to train the neural networks are simulated by solving wave propagation in coupled poroviscoelastic-viscoelastic-acoustic media. As the forward model, we consider a high-order discontinuous Galerkin method, while deep convolutional neural networks are used to solve the parameter estimation problem. In the numerical experiment, we estimate the material porosity and tortuosity, while the remaining parameters, which are of less interest, are successfully marginalized in the neural networks-based inversion. Computational examples confirm the feasibility and accuracy of this approach.
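The paper's inversion network is not reproduced here, but the core idea of regressing a handful of material parameters from measured wavefield data can be sketched with a minimal numpy forward pass. All shapes, kernel counts, and the two-output head below are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

def conv2d(x, kernels):
    # valid-mode 2D convolution: x is (H, W), kernels is (n, kh, kw)
    n, kh, kw = kernels.shape
    H, W = x.shape
    out = np.zeros((n, H - kh + 1, W - kw + 1))
    for i in range(n):
        for r in range(H - kh + 1):
            for c in range(W - kw + 1):
                out[i, r, c] = np.sum(x[r:r + kh, c:c + kw] * kernels[i])
    return out

def relu(z):
    return np.maximum(z, 0.0)

def forward(data, kernels, w, b):
    feats = relu(conv2d(data, kernels))   # convolutional feature maps
    pooled = feats.mean(axis=(1, 2))      # global average pooling
    return pooled @ w + b                 # two outputs: porosity, tortuosity

rng = np.random.default_rng(0)
data = rng.normal(size=(16, 16))          # stand-in for simulated ultrasound data
kernels = rng.normal(size=(4, 3, 3)) * 0.1
w = rng.normal(size=(4, 2)) * 0.1
b = np.array([0.3, 1.5])                  # rough prior means for (porosity, tortuosity)
est = forward(data, kernels, w, b)        # est.shape == (2,)
```

In practice the weights would be trained by gradient descent on many forward-model simulations; the point of the sketch is only the data-to-parameters mapping shape.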


Citations
Journal ArticleDOI
TL;DR: A broad overview of the extensive impact computational modeling has had in materials science in the past few decades is presented in this article, with a focus on where the path forward lies as this rapidly expanding field evolves to meet the challenges of the next few decades.
Abstract: Modeling and simulation is transforming modern materials science, becoming an important tool for the discovery of new materials and material phenomena, for gaining insight into the processes that govern materials behavior, and, increasingly, for quantitative predictions that can be used as part of a design tool in full partnership with experimental synthesis and characterization. Modeling and simulation is the essential bridge from good science to good engineering, spanning from fundamental understanding of materials behavior to deliberate design of new materials technologies leveraging new properties and processes. This Roadmap presents a broad overview of the extensive impact computational modeling has had in materials science in the past few decades, and offers focused perspectives on where the path forward lies as this rapidly expanding field evolves to meet the challenges of the next few decades. The Roadmap offers perspectives on advances within disciplines as diverse as phase field methods to model mesoscale behavior and molecular dynamics methods to deduce the fundamental atomic-scale dynamical processes governing materials response, to the challenges involved in the interdisciplinary research that tackles complex materials problems where the governing phenomena span different scales of materials behavior requiring multiscale approaches. The shift from understanding fundamental materials behavior to development of quantitative approaches to explain and predict experimental observations requires advances in the methods and practice in simulations for reproducibility and reliability, and interacting with a computational ecosystem that integrates new theory development, innovative applications, and an increasingly integrated software and computational infrastructure that takes advantage of the increasingly powerful computational methods and computing hardware.

108 citations


Additional excerpts

  • ...Thus, deep learning and artificial intelligence algorithms can be used to solve the inverse problem to determine on-the-fly the actions to minimize defects during processing [122, 123]....

    [...]

Journal ArticleDOI
TL;DR: A hybrid ML method using a combination of artificial neural network (ANN) and genetic algorithm (GA) to implicitly build a nonlinear relationship between pore structure parameters and permeability is proposed.
Abstract: Permeability prediction is crucial in shale gas and CO2 geological sequestration. However, the intricate pore structure complicates the prediction of permeability. Machine learning (ML) is a promising approach for predicting inherent correlations in large data sets. In this paper, a hybrid ML method is proposed to implicitly build a nonlinear relationship between pore structure parameters and permeability. For the dataset preparation, an improved quartet structure generation set algorithm was first developed to generate 1000 porous media. Then, the pore structure parameters were extracted as input parameters and the permeability was calculated as the output parameter. For the ML modelling, a hybrid ML method was proposed using a combination of an artificial neural network (ANN) and a genetic algorithm (GA). The ANN was employed to learn the nonlinear relationships and the GA was used to tune the ANN architecture for the best performance. The prediction results show that the GA–ANN was robust in predicting permeability based on pore structure parameters. The ANN model with the optimum architecture could achieve an average R value of 0.998 on the training set and 0.999 on the testing set. Practically, the porous sample can be obtained through micro-computed tomography (CT) or nano-CT, and the proposed framework can be applied to real porous media. Fast prediction of permeability based on formation factors can provide some insights on reservoir evaluation and reservoir stimulation.
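The GA–ANN combination described above can be illustrated with a toy sketch: a genetic algorithm searches over the hidden-layer width, and each candidate is scored by fitting a cheap surrogate regressor. The dataset, fitness function, and GA operators are all simplified assumptions (a random-feature least-squares fit stands in for full ANN training):

```python
import numpy as np

rng = np.random.default_rng(1)

# toy dataset: permeability as a nonlinear function of two pore-structure parameters
X = rng.uniform(0.1, 0.5, size=(200, 2))   # e.g. porosity, mean pore radius
y = X[:, 0] ** 3 * X[:, 1] ** 2            # Kozeny-like toy relation

def fitness(hidden):
    # one-hidden-layer surrogate: random hidden weights, least-squares output
    # weights -- a cheap stand-in for fully training an ANN of this width
    W = rng.normal(size=(2, hidden))
    H = np.tanh(X @ W)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return -np.mean((H @ beta - y) ** 2)   # negative MSE: higher is better

def ga_search(pop_size=8, generations=5):
    pop = rng.integers(2, 32, size=pop_size)           # genome: hidden-layer width
    for _ in range(generations):
        scores = np.array([fitness(h) for h in pop])
        parents = pop[np.argsort(scores)][-pop_size // 2:]             # selection
        children = parents + rng.integers(-2, 3, size=parents.shape)   # mutation
        pop = np.clip(np.concatenate([parents, children]), 2, 64)
    return int(pop[np.argmax([fitness(h) for h in pop])])

best = ga_search()   # architecture chosen by the GA (a hidden-layer width, 2..64)
```

A real GA–ANN pipeline would evolve richer genomes (layer counts, activations, learning rates) and train each candidate network properly; the selection-mutation loop is the transferable part.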

61 citations

Journal ArticleDOI
TL;DR: It is shown that the CNNs can be used to predict the porosity, permeability, and tortuosity with good accuracy.
Abstract: Convolutional neural networks (CNN) are utilized to encode the relation between initial configurations of obstacles and three fundamental quantities in porous media: porosity ([Formula: see text]), permeability (k), and tortuosity (T). Two-dimensional systems with obstacles are considered. The fluid flow through a porous medium is simulated with the lattice Boltzmann method. The analysis has been performed for systems with [Formula: see text], which covers a span of five orders of magnitude for permeability [Formula: see text] and tortuosity [Formula: see text]. It is shown that the CNNs can be used to predict the porosity, permeability, and tortuosity with good accuracy. With the usage of the CNN models, the relation between T and [Formula: see text] has been obtained and compared with the empirical estimate.
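Training such a CNN requires ground-truth labels for each obstacle configuration. Porosity follows directly from the grid, and a simple geometric proxy for tortuosity can be computed by shortest-path search. The BFS proxy below is an illustrative stand-in, not the flow-weighted tortuosity obtained from the lattice Boltzmann simulation in the paper:

```python
import numpy as np
from collections import deque

def porosity(grid):
    # porosity = fraction of open (fluid) cells; grid entries: 1 = open, 0 = solid
    return float(grid.mean())

def tortuosity(grid):
    # geometric proxy for T: shortest 4-connected open path from the top row
    # to the bottom row, divided by the straight-line domain length H
    H, W = grid.shape
    dist = np.full((H, W), -1)
    q = deque()
    for c in range(W):
        if grid[0, c]:
            dist[0, c] = 0
            q.append((0, c))
    while q:                                  # breadth-first search
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W and grid[nr, nc] and dist[nr, nc] < 0:
                dist[nr, nc] = dist[r, c] + 1
                q.append((nr, nc))
    reached = dist[H - 1][dist[H - 1] >= 0]
    return (reached.min() + 1) / H if reached.size else float("inf")

open_medium = np.ones((8, 8), dtype=int)      # no obstacles: straight paths, T = 1
```

For a fully open medium the shortest path is a straight column, so the proxy returns exactly 1; adding obstacles lengthens the path and pushes T above 1, mirroring the qualitative T-vs-porosity trend studied in the paper.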

54 citations

Journal ArticleDOI
TL;DR: This work proposes a reconstruction method based on convolutional neural networks (CNN) that takes full advantage of the large amount of tomographic data to build an efficient neural network that rapidly predicts the reconstruction from the sinograms fed to it.
Abstract: Nonlinear tomographic absorption spectroscopy (NTAS) is an emerging gas sensing technique for reactive flows that has been proven to be capable of simultaneously imaging temperature and concentration of absorbing gas. However, the nonlinear tomographic problems are typically solved with an optimization algorithm such as simulated annealing (SA), which suffers from high computational cost. This problem becomes more severe when thousands of tomographic data sets need to be processed for the temporal resolution of turbulent flames. To overcome this limitation, in this work we propose a reconstruction method based on convolutional neural networks (CNN), which can take full advantage of the large amount of tomographic data to build an efficient neural network that rapidly predicts the reconstruction from the sinograms fed to it. Simulative studies were performed to investigate how the parameters affect the performance of the neural networks. The results show that CNN can effectively reduce the computational cost and at the same time achieve a similar accuracy level as SA. The successful demonstration of CNN in this work indicates possible applications of other sophisticated deep neural networks such as deep belief networks (DBN) and generative adversarial networks (GAN) to nonlinear tomography. © 2018 Elsevier Ltd.

42 citations

Journal ArticleDOI
TL;DR: In this article, the feasibility of a data-based artificial neural network (ANN) for the estimation of the sound absorption coefficient of a layered fibrous material is investigated, and the results indicate that the ANN model exhibits a good correlation between the estimated and measured absorption coefficients.

30 citations

References
Proceedings Article
01 Jan 2015
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
Abstract: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has low memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, by which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.
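The Adam update described in the abstract fits in a few lines of numpy: exponentially decayed first and second moment estimates, bias correction, and a rescaled gradient step. The hyper-parameter defaults match the paper; the quadratic test objective is illustrative:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    # one Adam update: decayed moment estimates plus bias correction
    m = b1 * m + (1 - b1) * grad           # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad ** 2      # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)              # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)              # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# minimize f(x) = x^2 starting from x = 1.0
theta, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
for t in range(1, 1001):
    grad = 2 * theta                       # gradient of the quadratic objective
    theta, m, v = adam_step(theta, grad, m, v, t)
```

Note the per-parameter step is roughly bounded by the learning rate regardless of gradient scale, which is the diagonal-rescaling invariance the abstract mentions.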

111,197 citations

Proceedings Article
03 Dec 2012
TL;DR: A large, deep convolutional neural network, consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieved state-of-the-art error rates on ImageNet classification.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
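The "dropout" regularization mentioned in the abstract is easy to sketch. The inverted-dropout variant shown here scales surviving activations at training time so that no rescaling is needed at test time; the shapes and drop probability are illustrative:

```python
import numpy as np

def dropout(x, p, rng, train=True):
    # inverted dropout: zero each activation with probability p and scale the
    # survivors by 1/(1-p), keeping the expected activation unchanged
    if not train or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
a = np.ones((4, 1000))            # stand-in for fully-connected layer activations
out = dropout(a, p=0.5, rng=rng)  # entries are 0 or 2, mean stays near 1
```

Because each unit is randomly silenced, co-adapted feature detectors cannot rely on specific partners being present, which is the mechanism behind dropout's regularizing effect.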

73,978 citations

Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, convolutional neural networks trained with gradient-based learning are shown to outperform other techniques on handwritten character recognition, and a new learning paradigm, graph transformer networks (GTN), allows multimodule recognition systems to be trained globally.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.

42,067 citations

Book
01 Jan 2009
TL;DR: The motivations and principles regarding learning algorithms for deep architectures are discussed, in particular those exploiting as building blocks unsupervised learning of single-layer models, such as Restricted Boltzmann Machines, used to construct deeper models such as Deep Belief Networks.
Abstract: Can machine learning deliver AI? Theoretical results, inspiration from the brain and cognition, as well as machine learning experiments suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one would need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers, graphical models with many levels of latent variables, or in complicated propositional formulae re-using many sub-formulae. Each level of the architecture represents features at a different level of abstraction, defined as a composition of lower-level features. Searching the parameter space of deep architectures is a difficult task, but new algorithms have been discovered and a new sub-area has emerged in the machine learning community since 2006, following these discoveries. Learning algorithms such as those for Deep Belief Networks and other related unsupervised learning algorithms have recently been proposed to train deep architectures, yielding exciting results and beating the state-of-the-art in certain areas. Learning Deep Architectures for AI discusses the motivations for and principles of learning algorithms for deep architectures. By analyzing and comparing recent results with different learning algorithms for deep architectures, explanations for their success are proposed and discussed, highlighting challenges and suggesting avenues for future explorations in this area.

7,767 citations

Journal ArticleDOI
TL;DR: In this article, a theory for the propagation of stress waves in a porous elastic solid containing compressible viscous fluid is developed for the lower frequency range where the assumption of Poiseuille flow is valid.
Abstract: A theory is developed for the propagation of stress waves in a porous elastic solid containing compressible viscous fluid. The emphasis of the present treatment is on materials where fluid and solid are of comparable densities as for instance in the case of water‐saturated rock. The paper denoted here as Part I is restricted to the lower frequency range where the assumption of Poiseuille flow is valid. The extension to the higher frequencies will be treated in Part II. It is found that the material may be described by four nondimensional parameters and a characteristic frequency. There are two dilatational waves and one rotational wave. The physical interpretation of the result is clarified by treating first the case where the fluid is frictionless. The case of a material containing viscous fluid is then developed and discussed numerically. Phase velocity dispersion curves and attenuation coefficients for the three types of waves are plotted as a function of the frequency for various combinations of the characteristic parameters.

7,172 citations