
Methodology for plausibility checking of structural mechanics simulations using deep learning on existing simulation data

TL;DR: An approach to transfer different FE meshes, corresponding FE results and boundary conditions to an individual matrix of fixed size for very different structural mechanics FE simulations, using spherical detector surfaces to project three-dimensional information onto their surfaces.
Abstract: In modern product development, the use of sophisticated simulation tools for assessing the effects of design changes on the intended product behavior is essential. However, setting up valid simulations requires expert knowledge, acquired skills, and sufficient expertise. Design engineers, who perform finite element analysis (FEA) only infrequently, must be assisted, and their FEA results need to be checked for plausibility. An automatic plausibility check for finite element (FE) simulations in linear structural mechanics can identify non-plausible simulations and warn the user to interpret the results cautiously or to ask for expert help. In this context, currently available tools can only compare very similar simulations. However, as the amount of simulation data available in industry keeps increasing, a data-driven simulation check is an obvious next step. Nevertheless, the question arises of how simulation data of very different parts and simulations can be transferred to a single software tool, how this tool can learn the relevant rules behind plausible simulations, and how it can be applied to new simulations. In this context, it is especially important to train a metamodel that is able to generalize these rules so that it can later be applied to unknown simulations. This paper presents an approach to transfer different FE meshes, corresponding FE results, and boundary conditions to an individual matrix of fixed size for very different structural mechanics FE simulations. The novel approach uses spherical detector surfaces to project three-dimensional information onto their surfaces. It allows generating the so-called "DNA of an FE simulation"; classification algorithms, e.g. Support Vector Machines or deep learning neural networks such as Convolutional Neural Networks (CNNs), can then classify this information. The whole methodology reduces the dimension of a 3D finite element simulation to a 2D matrix of numeric values. The matrix contains all the information relevant for the classification into "plausible" or "non-plausible". A non-plausible simulation contains errors that would quickly be identified by an experienced simulation engineer, whereas a plausible simulation does not contain such errors. As less experienced simulation users in design departments are not trained to find such errors in their simulation setup, they cannot detect them or take adequate countermeasures. In the paper, every single step of the novel methodology for plausibility checking of structural mechanics simulations is illustrated and explained in detail for simplified parts and corresponding simulations.
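The core transfer step described in this abstract, projecting mesh-independent 3D information onto a spherical detector surface to obtain a matrix of fixed size, can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical reading of that idea rather than the authors' exact procedure: the detector centre at the part centroid, the radial projection, the max-per-cell aggregation and the name project_to_detector are assumptions made purely for illustration.

```python
# Minimal sketch of the spherical-detector idea described in the abstract.
# Assumptions (not from the paper): the detector is centred on the part's
# centroid, nodes are projected radially onto the sphere, and each detector
# cell stores the maximum nodal result value falling into it.
import numpy as np

def project_to_detector(node_coords, node_values, n_theta=32, n_phi=64):
    """Map an arbitrary FE result field to a fixed-size 2D matrix.

    node_coords : (N, 3) array of node positions
    node_values : (N,)  array of a scalar FE result (e.g. displacement magnitude)
    returns     : (n_theta, n_phi) matrix, independent of the mesh size N
    """
    center = node_coords.mean(axis=0)            # detector centre (assumption)
    rel = node_coords - center
    r = np.linalg.norm(rel, axis=1) + 1e-12

    theta = np.arccos(np.clip(rel[:, 2] / r, -1.0, 1.0))   # polar angle in [0, pi]
    phi = np.arctan2(rel[:, 1], rel[:, 0]) + np.pi          # azimuth in [0, 2*pi)

    i = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
    j = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)

    detector = np.zeros((n_theta, n_phi))
    for bi, bj, v in zip(i, j, node_values):
        detector[bi, bj] = max(detector[bi, bj], v)  # keep the strongest signal per cell
    return detector
```

One such matrix would be produced per result variable or boundary condition; the fixed size is what allows very different meshes and parts to be fed to the same classifier.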


Citations
Journal ArticleDOI
01 Jul 2019
TL;DR: This paper describes and highlights the transition from current product development processes to a data-driven / simulation-driven product development process, in particular the shifts and changes of different roles and domains.
Abstract: Current trends in product development are digital engineering, the increasing use of assistance tools based on artificial intelligence and, in general, shorter product lifecycles. These trends and new tools strongly rely on available data and will irreversibly change established product development processes. One example of such a new data-driven tool is the plausibility check of linear finite element simulations with Convolutional Neural Networks (CNN). This tool is capable of determining whether new simulation results are plausible or non-plausible according to numeric input data. The digitalization and the increased use of data-driven tools employing algorithms known from Artificial Intelligence also shift the roles of many involved engineers. This paper describes and highlights the transition from current product development processes to a data-driven / simulation-driven product development process. In particular, the shifts and changes of different roles and domains are illustrated, and an example of changing roles in the design and simulation department is described. Furthermore, required adjustments in the design process are derived and compared to the current status.

10 citations


Cites background or methods from "Methodology for plausibility checki..."

  • ...In this regard, for example assistance systems for the knowledge-based setup and plausibility check of finite element simulations (Kestel et al., 2016 and Spruegel et al., 2018) strongly re-shape the established simulation and design verification processes....


  • ...A detailed explanation can be found in (Spruegel et al., 2018)....


  • ...More detailed information about the methodology can be found in (Spruegel et al., 2018)....


01 Jan 2017
TL;DR: Two approaches are presented for the use of deep learning, which is well suited to processing big data, to compute product properties at the local level.
Abstract: Within the Transregional Collaborative Research Centre 73 (SFB/TR 73), a self-learning engineering workbench (SLASSY) is being developed. SLASSY assists product developers in designing sheet-bulk metal formed (SBMF) parts by computing product properties based on given product and process characteristics. SLASSY enables product developers to evaluate the manufacturability of their current part design. For this, SLASSY uses data from manufacturing experts to create metamodels. Currently, it can handle product properties that apply to a whole part variant (on the whole-part level), for instance the minimum form filling degree. The further development of the SBMF manufacturing technology requires the consideration of the product properties in higher detail (on the local level). This requires a higher data density, that is, data for each part variant and product property need to be acquired at every point of interest. Due to the increased amount of data, the data mining algorithms currently used in SLASSY for creating the metamodels cannot be reused. To face this challenge, deep learning algorithms, which are well suited to processing big data, are utilized. In this contribution, two approaches for the use of deep learning to compute product properties on the local level are presented.

7 citations

Journal ArticleDOI
TL;DR: A methodology will be presented to transform different finite element simulations to unified matrices, which can be described as the DNA of a finite element simulation and used as an input for any machine learning model, such as convolutional neural networks.

7 citations

DissertationDOI
01 Jan 2008
TL;DR: Front matter and chapter structure of the dissertation: title, short summary, abstract, contents and indexes, introduction and objectives, materials and methods.
Abstract: Title, short summary, abstract, contents and indexes; introduction and objectives; theory; materials and methods; results and discussion; conclusions and outlook; summary; references; glossary; acknowledgements; appendix.

5 citations

01 Jan 2016
TL;DR: A methodology for a plausibility check using spherical detector surfaces is presented, with which it is possible to reduce any FE simulation to matrices of fixed size for each boundary condition and each FEA result variable.
Abstract: Finite Element Analysis (FEA) is a very efficient tool for optimizing product performance and quality. Hence, more experienced simulation engineers are needed, but they are not available. Consequently, other users, such as design engineers, should be able to perform valid, reliable FEA. One of the goals of the research cooperation FORPRO² is to create a knowledge-based FEA assistance system with an integrated plausibility check for structural mechanics. Within this paper, a methodology for a plausibility check using spherical detector surfaces is presented. With this approach, it is possible to reduce any FE simulation to matrices of fixed size for each boundary condition and each FEA result variable. The created matrices can then be combined to form a single larger image. These images can afterwards be classified as plausible or implausible by a Deep Learning Neural Network.
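As a rough illustration of the last two steps named in this abstract, combining the per-quantity matrices into a single larger image and classifying it, the sketch below tiles equally sized matrices into one image and feeds it to a deliberately small Keras CNN. The tiling layout, the layer sizes and the names combine_matrices and build_classifier are illustrative assumptions, not the network reported by the authors.

```python
# Sketch: tile the fixed-size matrices (one per boundary condition / result
# variable) into one larger image and classify it as plausible (1) vs.
# non-plausible (0). The tiny CNN is only an illustrative stand-in.
import numpy as np
import tensorflow as tf

def combine_matrices(matrices, cols=2):
    """Tile a list of equally sized (H, W) matrices into one larger image."""
    h, w = matrices[0].shape
    rows = int(np.ceil(len(matrices) / cols))
    image = np.zeros((rows * h, cols * w))
    for k, m in enumerate(matrices):
        r, c = divmod(k, cols)
        image[r * h:(r + 1) * h, c * w:(c + 1) * w] = m
    return image

def build_classifier(input_shape):
    """Binary CNN classifier for the combined simulation image."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

# Hypothetical usage:
# image = combine_matrices(per_quantity_matrices)
# model = build_classifier(image.shape + (1,))
# model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```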

3 citations

References
Book
01 Jan 2009
TL;DR: The motivations and principles of learning algorithms for deep architectures are discussed, in particular those exploiting, as building blocks, unsupervised learning of single-layer models such as Restricted Boltzmann Machines, which are used to construct deeper models such as Deep Belief Networks.
Abstract: Can machine learning deliver AI? Theoretical results, inspiration from the brain and cognition, as well as machine learning experiments suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one would need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers, graphical models with many levels of latent variables, or in complicated propositional formulae re-using many sub-formulae. Each level of the architecture represents features at a different level of abstraction, defined as a composition of lower-level features. Searching the parameter space of deep architectures is a difficult task, but new algorithms have been discovered and a new sub-area has emerged in the machine learning community since 2006, following these discoveries. Learning algorithms such as those for Deep Belief Networks and other related unsupervised learning algorithms have recently been proposed to train deep architectures, yielding exciting results and beating the state-of-the-art in certain areas. Learning Deep Architectures for AI discusses the motivations for and principles of learning algorithms for deep architectures. By analyzing and comparing recent results with different learning algorithms for deep architectures, explanations for their success are proposed and discussed, highlighting challenges and suggesting avenues for future explorations in this area.

7,767 citations


Additional excerpts

  • ...The models are capable of discovering complex structures within large data sets utilizing the backpropagation algorithm (Bengio, 2009; LeCun, Bengio & Hinton, 2015; Deng, 2012)....


Posted Content
TL;DR: Elegant connections between the concepts of Informedness, Markedness, Correlation and Significance, as well as their intuitive relationships with Recall and Precision, are demonstrated.
Abstract: Commonly used evaluation measures including Recall, Precision, F-Measure and Rand Accuracy are biased and should not be used without a clear understanding of the biases and a corresponding identification of chance or base-case levels of the statistic. Using these measures, a system that performs worse in the objective sense of Informedness can appear to perform better under any of these commonly used measures. We discuss several concepts and measures that reflect the probability that a prediction is informed versus chance (Informedness), and introduce Markedness as a dual measure for the probability that a prediction is marked versus chance. Finally, we demonstrate elegant connections between the concepts of Informedness, Markedness, Correlation and Significance, as well as their intuitive relationships with Recall and Precision, and outline the extension from the dichotomous case to the general multi-class case.
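For context, the two chance-corrected measures named here can be computed directly from a binary confusion matrix. The sketch below uses the commonly stated forms, Informedness = TPR + TNR − 1 and Markedness = PPV + NPV − 1; these should be checked against the paper's general multi-class definitions.

```python
# Sketch of the chance-corrected measures discussed by Powers, computed from a
# binary confusion matrix (forms as commonly stated; see the paper for the
# general multi-class definitions).
def informedness_markedness(tp, fp, fn, tn):
    tpr = tp / (tp + fn)   # recall / sensitivity
    tnr = tn / (tn + fp)   # specificity / inverse recall
    ppv = tp / (tp + fp)   # precision
    npv = tn / (tn + fn)   # inverse precision
    informedness = tpr + tnr - 1.0   # probability the prediction is informed vs. chance
    markedness = ppv + npv - 1.0     # probability the prediction is marked vs. chance
    return informedness, markedness

# Example: a classifier that labels 40/50 plausible and 45/50 non-plausible
# simulations correctly yields informedness 0.7 and markedness ~0.71.
```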

5,092 citations


"Methodology for plausibility checki..." refers methods in this paper

  • ...Usually, goodness-of-fit parameters such as the "accuracy" (derived from the confusion matrix; Powers, 2011) are calculated....


Journal ArticleDOI
TL;DR: In this paper, a survey of the available data mining techniques is provided and a comparative study of such techniques is presented, based on a database researcher's point-of-view.
Abstract: Mining information and knowledge from large databases has been recognized by many researchers as a key research topic in database systems and machine learning, and by many industrial companies as an important area with an opportunity of major revenues. Researchers in many different fields have shown great interest in data mining. Several emerging applications in information-providing services, such as data warehousing and online services over the Internet, also call for various data mining techniques to better understand user behavior, to improve the service provided and to increase business opportunities. In response to such a demand, this article provides a survey, from a database researcher's point of view, on the data mining techniques developed recently. A classification of the available data mining techniques is provided and a comparative study of such techniques is presented.

2,327 citations

Journal ArticleDOI
TL;DR: A basic data-driven design framework with necessary modifications under various industrial operating conditions is sketched, aiming to offer a reference for industrial process monitoring on large-scale industrial processes.
Abstract: Recently, to ensure the reliability and safety of modern large-scale industrial processes, data-driven methods have been receiving considerably increasing attention, particularly for the purpose of process monitoring. However, great challenges are also met under different real operating conditions by using the basic data-driven methods. In this paper, widely applied data-driven methodologies suggested in the literature for process monitoring and fault diagnosis are surveyed from the application point of view. The major task of this paper is to sketch a basic data-driven design framework with necessary modifications under various industrial operating conditions, aiming to offer a reference for industrial process monitoring on large-scale industrial processes.

1,289 citations


"Methodology for plausibility checki..." refers background in this paper

  • ...The most challenging topic when contemplating data-driven design is dealing with incomplete data (Yin et al., 2014)....


Li Deng
01 Jan 2012
TL;DR: This tutorial survey introduces the emerging area of deep learning or hierarchical learning to the APSIPA community and provides a taxonomy-oriented survey of the existing deep architectures and algorithms in the literature, categorizing them into three classes: generative, discriminative, and hybrid.
Abstract: In this invited paper, my overview material on the same topic as presented in the plenary overview session of APSIPA-2011 and the tutorial material presented in the same conference (Deng, 2011) are expanded and updated to include more recent developments in deep learning. The previous and the updated materials cover both theory and applications, and analyze its future directions. The goal of this tutorial survey is to introduce the emerging area of deep learning or hierarchical learning to the APSIPA community. Deep learning refers to a class of machine learning techniques, developed largely since 2006, where many stages of nonlinear information processing in hierarchical architectures are exploited for pattern classification and for feature learning. In the more recent literature, it is also connected to representation learning, which involves a hierarchy of features or concepts where higher-level concepts are defined from lower-level ones and where the same lower-level concepts help to define higher-level ones. In this tutorial, a brief history of deep learning research is discussed first. Then, a classificatory scheme is developed to analyze and summarize major work reported in the deep learning literature. Using this scheme, I provide a taxonomy-oriented survey on the existing deep architectures and algorithms in the literature, and categorize them into three classes: generative, discriminative, and hybrid. Three representative deep architectures --deep auto-encoder, deep stacking network, and deep neural network (pre-trained with deep belief network) --one in each of the three classes, are presented in more detail. Next, selected applications of deep learning are reviewed in broad areas of signal and information processing including audio/speech, image/vision, multimodality, language modeling, natural language processing, and information retrieval. Finally, future directions of deep learning are discussed and analyzed.

119 citations


Additional excerpts

  • ...The models are capable of discovering complex structures within large data sets utilizing the backpropagation algorithm (Bengio, 2009; LeCun, Bengio & Hinton, 2015; Deng, 2012)....
