Open Access · Journal Article · DOI

NN-EUCLID: deep-learning hyperelasticity without stress data

TL;DR
In this paper, the authors propose an approach for unsupervised learning of hyperelastic constitutive laws with physics-consistent deep neural networks, in which the absence of stress labels is compensated for by a physics-motivated loss function based on the conservation of linear momentum that guides the learning process.
Abstract
We propose a new approach for unsupervised learning of hyperelastic constitutive laws with physics-consistent deep neural networks. In contrast to supervised learning, which assumes the availability of stress-strain pairs, the approach uses only realistically measurable full-field displacement and global reaction force data; it therefore lies within the scope of our recent framework for Efficient Unsupervised Constitutive Law Identification and Discovery (EUCLID), and we denote it NN-EUCLID. The absence of stress labels is compensated for by leveraging a physics-motivated loss function based on the conservation of linear momentum to guide the learning process. The constitutive model is based on input-convex neural networks, which are capable of learning a function that is convex with respect to its inputs. By employing a specially designed neural network architecture, multiple physical and thermodynamic constraints for hyperelastic constitutive laws - such as material frame indifference, (poly-)convexity, and a stress-free reference configuration - are automatically satisfied. We demonstrate the ability of the approach to accurately learn several hidden isotropic and anisotropic hyperelastic constitutive laws - including, for example, the Mooney-Rivlin, Arruda-Boyce, Ogden, and Holzapfel models - without using stress data. For anisotropic hyperelasticity, the unknown anisotropic fiber directions are discovered automatically, jointly with the constitutive model. The neural network-based constitutive models show good generalization capability beyond the strain states observed during training and are readily deployable in a general finite element framework for simulating complex mechanical boundary value problems with good accuracy.
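The convexity guarantee mentioned in the abstract comes from the input-convex neural network (ICNN) construction: convex, nondecreasing activations composed through nonnegative hidden-to-hidden weights, plus an affine skip term. The following is a minimal generic ICNN sketch, not the paper's actual architecture (which additionally builds in frame indifference and a stress-free reference state); the class `TinyICNN` and its parameter names are illustrative assumptions.

```python
import numpy as np

def softplus(x):
    """Convex, nondecreasing activation."""
    return np.log1p(np.exp(x))

class TinyICNN:
    """One-hidden-layer input-convex network. The output is convex in x because
    convex, nondecreasing activations are combined with nonnegative weights
    (enforced by passing raw weights through softplus) plus an affine term."""
    def __init__(self, dim_in, hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx0 = rng.normal(size=(hidden, dim_in))
        self.b0 = rng.normal(size=hidden)
        self.Wz1_raw = rng.normal(size=hidden)  # softplus(.) -> nonnegative mix
        self.Wx1 = rng.normal(size=dim_in)      # affine skip term keeps convexity
        self.b1 = 0.0

    def __call__(self, x):
        z1 = softplus(self.Wx0 @ x + self.b0)  # each component convex in x
        return softplus(self.Wz1_raw) @ z1 + self.Wx1 @ x + self.b1
```

Convexity can be checked numerically: for any inputs a, b and t in [0, 1], the network satisfies f(t*a + (1-t)*b) <= t*f(a) + (1-t)*f(b).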


Citations
Journal Article · DOI

A new family of Constitutive Artificial Neural Networks towards automated model discovery

TL;DR: In this paper, a new family of constitutive artificial neural networks (CANNs) is proposed that inherently satisfies common kinematic, thermodynamic, and physical constraints and constrains the design space of admissible functions to create robust approximators.
Journal Article · DOI

Automated discovery of generalized standard material models with EUCLID

TL;DR: In this article, the authors extend the approach of unsupervised automated discovery of material laws (denoted EUCLID) to the general case of a material belonging to an unknown class of constitutive behavior.
Journal Article · DOI

Finite electro-elasticity with physics-augmented neural networks

TL;DR: In this paper, a machine-learning-based constitutive model for electro-mechanically coupled material behavior at finite deformations is proposed. It fulfills the polyconvexity condition, which ensures material stability, as well as thermodynamic consistency, objectivity, material symmetry, and growth conditions.
Journal Article · DOI

FEANN - An efficient data-driven multiscale approach based on physics-constrained neural networks and automated data mining

TL;DR: A new data-driven multiscale framework is presented, based on physics-constrained artificial neural networks (ANNs) as macroscopic surrogate models together with an autonomous data-mining process that reduces the number of time-consuming microscale simulations to a minimum.
Journal Article · DOI

Automated identification of linear viscoelastic constitutive laws with EUCLID

TL;DR: In this paper, the authors extend EUCLID to linear viscoelasticity by adopting a generalized Maxwell model expressed as a Prony series and deploying it for identification.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
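The "adaptive estimates of lower-order moments" in the Adam summary refers to exponential moving averages of the gradient and its elementwise square, with bias correction. A minimal NumPy sketch of one update step (not the reference implementation; `adam_step` is an illustrative name):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m) and its
    square (v), bias-corrected, then a per-parameter scaled step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)   # bias correction (t is 1-indexed)
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Iterating this step on a simple quadratic objective drives the parameter toward its minimum.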
Posted Content

Deep Residual Learning for Image Recognition

TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
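The core idea of residual learning is that each block learns a residual correction F(x) and outputs x + F(x), so the identity mapping is trivially representable. A minimal sketch of a two-layer residual block (illustrative, without the batch normalization or convolutions of the original architecture):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """y = x + F(x): the block learns only the residual F, so when W1 and W2
    are zero the block reduces exactly to the identity mapping."""
    return x + W2 @ relu(W1 @ x)
```

Because a zero residual yields the identity, stacking many such blocks does not make the identity harder to represent, which is what eases the training of very deep networks.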
Book

Deep Learning

TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and to understand the world in terms of a hierarchy of concepts; it is used in many applications, such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
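Dropout randomly zeroes activations during training; in the commonly used "inverted" variant, the surviving activations are rescaled by 1/(1-p) so that expected activations match between training and inference. A minimal sketch (the function name and signature are illustrative):

```python
import numpy as np

def dropout(x, p, rng, training=True):
    """Inverted dropout: zero each activation with probability p and rescale
    the survivors by 1/(1-p), so E[output] == x. At inference, pass through."""
    if not training or p == 0.0:
        return x
    mask = (rng.random(x.shape) >= p).astype(x.dtype)
    return x * mask / (1.0 - p)
```

With p = 0.5 each element is either dropped to 0 or doubled, and the mean over many units stays close to the original activation.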
Journal ArticleDOI

Multilayer feedforward networks are universal approximators

TL;DR: It is rigorously established that standard multilayer feedforward networks with as few as one hidden layer using arbitrary squashing functions are capable of approximating any Borel measurable function from one finite dimensional space to another to any desired degree of accuracy, provided sufficiently many hidden units are available.