Author

Marie E. Rognes

Bio: Marie E. Rognes is an academic researcher at Simula Research Laboratory. Her research centers on the finite element method and partial differential equations. She has an h-index of 21 and has co-authored 86 publications receiving 2,950 citations. Previous affiliations include Umeå University and the University of Oslo.


Papers
DOI
07 Dec 2015
TL;DR: The FEniCS Project is a collaborative project for the development of innovative concepts and tools for automated scientific computing, with a particular focus on the solution of differential equations by finite element methods.
Abstract: The FEniCS Project is a collaborative project for the development of innovative concepts and tools for automated scientific computing, with a particular focus on the solution of differential equations by finite element methods. The FEniCS Project's software consists of a collection of interoperable software components, including DOLFIN, FFC, FIAT, Instant, UFC, UFL, and mshr. This note describes the new features and changes introduced in the release of FEniCS version 1.5.

1,628 citations

Journal ArticleDOI
TL;DR: The Unified Form Language (UFL) is a domain-specific language for representing weak formulations of partial differential equations with a view to numerical approximation; it has been used to express finite element methods for complex systems of PDEs compactly, in near-mathematical notation.
Abstract: We present the Unified Form Language (UFL), which is a domain-specific language for representing weak formulations of partial differential equations with a view to numerical approximation. Features of UFL include support for variational forms and functionals, automatic differentiation of forms and expressions, arbitrary function space hierarchies for multifield problems, general differential operators and flexible tensor algebra. With these features, UFL has been used to effortlessly express finite element methods for complex systems of partial differential equations in near-mathematical notation, resulting in compact, intuitive and readable programs. We present in this work the language and its construction. An implementation of UFL is freely available as an open-source software library. The library generates abstract syntax tree representations of variational problems, which are used by other software libraries to generate concrete low-level implementations. Some application examples are presented and libraries that support UFL are highlighted.

338 citations
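The mechanism the abstract describes, building an abstract syntax tree from near-mathematical notation so that other tools can compile it, can be sketched with operator overloading in plain Python. The class and function names below are invented for illustration and are not the real UFL API:

```python
# Miniature of a UFL-like form language: operations build an expression
# tree (AST) instead of computing anything. Names here are hypothetical.
class Expr:
    pass

class Terminal(Expr):
    """A leaf symbol, e.g. a trial or test function."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.name

class Grad(Expr):
    def __init__(self, operand):
        self.operand = operand
    def __repr__(self):
        return f"grad({self.operand})"

class Inner(Expr):
    def __init__(self, a, b):
        self.a, self.b = a, b
    def __repr__(self):
        return f"inner({self.a}, {self.b})"

class Form:
    """An integrand paired with an integration measure, e.g. dx."""
    def __init__(self, integrand, measure):
        self.integrand, self.measure = integrand, measure
    def __repr__(self):
        return f"{self.integrand} * {self.measure}"

def grad(e):
    return Grad(e)

def inner(a, b):
    return Inner(a, b)

u, v = Terminal("u"), Terminal("v")
a = Form(inner(grad(u), grad(v)), "dx")   # weak Laplacian, as an AST
print(a)  # inner(grad(u), grad(v)) * dx
```

A form compiler would walk such a tree to emit low-level element code; here the tree only knows how to print itself.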

Journal ArticleDOI
TL;DR: A new technique for deriving discrete adjoint and tangent linear models of a finite element model, implemented using the FEniCS finite element form compiler; the approach is significantly more efficient and automatic than standard algorithmic differentiation techniques.
Abstract: In this paper we demonstrate a new technique for deriving discrete adjoint and tangent linear models of a finite element model. The technique is significantly more efficient and automatic than standard algorithmic differentiation techniques. The approach relies on a high-level symbolic representation of the forward problem. In contrast to developing a model directly in Fortran or C++, high-level systems allow the developer to express the variational problems to be solved in near-mathematical notation. As such, these systems have a key advantage: since the mathematical structure of the problem is preserved, they are more amenable to automated analysis and manipulation. The framework introduced here is implemented in a freely available software package named dolfin-adjoint, based on the FEniCS Project. Our approach to automated adjoint derivation relies on run-time annotation of the temporal structure of the model and employs the FEniCS finite element form compiler to automatically generate the low-level co...

250 citations
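The run-time annotation strategy described above can be illustrated, very loosely, by a minimal reverse-mode "tape" in plain Python. This is a generic sketch of taping, not the dolfin-adjoint API (which annotates whole variational solves rather than scalar operations):

```python
# Each operation records itself and its local partial derivatives on a
# tape; the adjoint pass replays the tape backwards (reverse-mode AD).
tape = []

class Var:
    def __init__(self, value):
        self.value = value
        self.grad = 0.0

    def __mul__(self, other):
        out = Var(self.value * other.value)
        # local partials: d(xy)/dx = y, d(xy)/dy = x
        tape.append((out, [(self, other.value), (other, self.value)]))
        return out

    def __add__(self, other):
        out = Var(self.value + other.value)
        tape.append((out, [(self, 1.0), (other, 1.0)]))
        return out

def backward(output):
    """Accumulate adjoints by traversing the tape in reverse."""
    output.grad = 1.0
    for out, partials in reversed(tape):
        for var, d in partials:
            var.grad += out.grad * d

x, y = Var(3.0), Var(2.0)
z = x * y + x          # z = xy + x = 9
backward(z)
# dz/dx = y + 1 = 3,  dz/dy = x = 3
```

The paper's point is that working at the level of whole variational problems, rather than scalar operations as here, makes the adjoint both cheaper to derive and easier to automate.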

Posted Content
TL;DR: The Unified Form Language is presented, which is a domain-specific language for representing weak formulations of partial differential equations with a view to numerical approximation and generates abstract syntax tree representations of variational problems, which are used by other software libraries to generate concrete low-level implementations.
Abstract: We present the Unified Form Language (UFL), which is a domain-specific language for representing weak formulations of partial differential equations with a view to numerical approximation. Features of UFL include support for variational forms and functionals, automatic differentiation of forms and expressions, arbitrary function space hierarchies for multi-field problems, general differential operators and flexible tensor algebra. With these features, UFL has been used to effortlessly express finite element methods for complex systems of partial differential equations in near-mathematical notation, resulting in compact, intuitive and readable programs. We present in this work the language and its construction. An implementation of UFL is freely available as an open-source software library. The library generates abstract syntax tree representations of variational problems, which are used by other software libraries to generate concrete low-level implementations. Some application examples are presented and libraries that support UFL are highlighted.

218 citations

Journal ArticleDOI
TL;DR: In this paper, a finite element method for the Stokes problem on fictitious domains is presented, which is based on a stabilized Nitsche method with ghost penalties for the velocity and pressure.
Abstract: We present a novel finite element method for the Stokes problem on fictitious domains. We prove inf-sup stability, optimal order convergence and uniform boundedness of the condition number of the discrete system. The finite element formulation is based on a stabilized Nitsche method with ghost penalties for the velocity and pressure to obtain stability in the presence of small cut elements. We demonstrate for the first time the applicability of the Nitsche fictitious domain method to three-dimensional Stokes problems. We further discuss a general, flexible and freely available implementation of the method and present numerical examples supporting the theoretical results.

166 citations
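A cut-cell Stokes implementation is out of scope here, but the core Nitsche idea the abstract relies on, imposing Dirichlet conditions weakly through consistency, symmetry and penalty terms, can be sketched on 1D Poisson with P1 elements. This is an illustrative toy, not the paper's method, and the penalty parameter `gamma` is chosen ad hoc:

```python
import numpy as np

def solve_poisson_nitsche(n=64, gamma=10.0):
    """Solve -u'' = f on [0,1] with u(0)=u(1)=0 imposed weakly (Nitsche)."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    f = lambda t: np.pi**2 * np.sin(np.pi * t)   # manufactured: u = sin(pi x)
    for e in range(n):                           # standard P1 assembly
        i, j = e, e + 1
        A[np.ix_([i, j], [i, j])] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        fm = f(0.5 * (x[i] + x[j]))              # midpoint quadrature
        b[i] += 0.5 * h * fm
        b[j] += 0.5 * h * fm
    # Nitsche boundary terms: -(du/dn) v - (dv/dn) u + (gamma/h) u v at each end.
    # p is the boundary node, q its neighbour; the boundary hat has normal
    # derivative +1/h, the neighbouring hat -1/h, at both endpoints.
    for p, q in [(0, 1), (n, n - 1)]:
        dphi_p, dphi_q = 1.0 / h, -1.0 / h
        A[p, p] += -2.0 * dphi_p + gamma / h     # consistency + symmetry + penalty
        A[p, q] += -dphi_q
        A[q, p] += -dphi_q
    return x, np.linalg.solve(A, b)

x, u = solve_poisson_nitsche()
# nodal error vs the exact solution sin(pi x) is O(h^2)
```

The paper combines terms of this kind with ghost penalties so that stability survives arbitrarily small cut elements in the fictitious-domain setting.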


Cited by
Book
24 Feb 2012
TL;DR: This book is a tutorial written by researchers and developers behind the FEniCS Project and explores an advanced, expressive approach to the development of mathematical software.
Abstract: This book is a tutorial written by researchers and developers behind the FEniCS Project and explores an advanced, expressive approach to the development of mathematical software. The presentation spans mathematical background, software design and the use of FEniCS in applications. Theoretical aspects are complemented with computer code which is available as free/open source software. The book begins with a special introductory tutorial for beginners. Following are chapters in Part I addressing fundamental aspects of the approach to automating the creation of finite element solvers. Chapters in Part II address the design and implementation of the FEniCS software. Chapters in Part III present the application of FEniCS to a wide range of applications, including fluid flow, solid mechanics, electromagnetics and geophysics.

2,372 citations

Journal ArticleDOI
01 Jun 2021
TL;DR: Some of the prevailing trends in embedding physics into machine learning are reviewed, some of the current capabilities and limitations are presented and diverse applications of physics-informed learning both for forward and inverse problems, including discovering hidden physics and tackling high-dimensional problems are discussed.
Abstract: Despite great progress in simulating multiphysics problems using the numerical discretization of partial differential equations (PDEs), one still cannot seamlessly incorporate noisy data into existing algorithms, mesh generation remains complex, and high-dimensional problems governed by parameterized PDEs cannot be tackled. Moreover, solving inverse problems with hidden physics is often prohibitively expensive and requires different formulations and elaborate computer codes. Machine learning has emerged as a promising alternative, but training deep neural networks requires big data, not always available for scientific problems. Instead, such networks can be trained from additional information obtained by enforcing the physical laws (for example, at random points in the continuous space-time domain). Such physics-informed learning integrates (noisy) data and mathematical models, and implements them through neural networks or other kernel-based regression networks. Moreover, it may be possible to design specialized network architectures that automatically satisfy some of the physical invariants for better accuracy, faster training and improved generalization. Here, we review some of the prevailing trends in embedding physics into machine learning, present some of the current capabilities and limitations and discuss diverse applications of physics-informed learning both for forward and inverse problems, including discovering hidden physics and tackling high-dimensional problems. The rapidly developing field of physics-informed learning integrates data and mathematical models seamlessly, enabling accurate inference of realistic and high-dimensional multiphysics problems. This Review discusses the methodology and provides diverse examples and an outlook for further developments.

1,114 citations
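The central construction in the review above, penalizing a model by how badly it violates the governing equation at random collocation points, can be sketched without any machine-learning framework. The example below is generic and not any particular library's API; the one-parameter model `u(t; a) = exp(a*t)` is a hypothetical stand-in for a neural network:

```python
import numpy as np

# Physics-informed loss for the ODE u'(t) = -u(t) with u(0) = 1:
# residual at random collocation points + initial-condition misfit.
def physics_informed_loss(a, n_points=50, seed=0):
    rng = np.random.default_rng(seed)
    t = rng.uniform(0.0, 1.0, n_points)   # random points in the domain
    u = np.exp(a * t)                     # candidate model u(t; a)
    du_dt = a * u                         # its derivative, in closed form
    residual = du_dt + u                  # vanishes iff u' = -u holds at t
    ic_misfit = np.exp(a * 0.0) - 1.0     # u(0) should equal 1
    return np.mean(residual**2) + ic_misfit**2

# The exact solution u = exp(-t) corresponds to a = -1 and zero loss;
# training would minimize this loss over the model's parameters.
```

With a neural network in place of `exp(a*t)`, the derivative `du_dt` comes from automatic differentiation rather than a closed form, which is what lets the same recipe scale to PDEs.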

Proceedings Article
03 Dec 2018
TL;DR: In this paper, the authors introduce neural ordinary differential equations, a new family of deep neural network models that parameterize the derivative of the hidden state using a neural network; the output of the network is computed using a black-box differential equation solver, and continuous normalizing flows are constructed as one application.
Abstract: We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.

1,082 citations
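The continuous-depth idea summarized above can be sketched without a deep-learning framework: define a vector field `f(h)` and hand it to a black-box ODE solver. The NumPy example below shows the forward pass only, with a fixed-step RK4 standing in for the solver and randomly drawn, untrained "weights"; the paper's adjoint-based backpropagation is not shown:

```python
import numpy as np

# Instead of a stack of discrete layers, the hidden state evolves as
# dh/dt = f(h), integrated from t=0 to t=1.
def f(h, W, bias):
    return np.tanh(W @ h + bias)          # the "layer" is a vector field

def odeint_rk4(h0, W, bias, t0=0.0, t1=1.0, steps=20):
    """A stand-in black-box solver: classical fixed-step RK4."""
    h, dt = h0.copy(), (t1 - t0) / steps
    for _ in range(steps):
        k1 = f(h, W, bias)
        k2 = f(h + 0.5 * dt * k1, W, bias)
        k3 = f(h + 0.5 * dt * k2, W, bias)
        k4 = f(h + dt * k3, W, bias)
        h = h + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return h

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 4))     # hypothetical untrained weights
bias = np.zeros(4)
h1 = odeint_rk4(rng.standard_normal(4), W, bias)   # h(1) from h(0)
```

Because the solver is treated as a black box, refining its step count changes the output only up to the integration tolerance, which is exactly the precision/speed trade-off the abstract mentions.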

Posted Content
TL;DR: In this paper, the authors introduce neural ordinary differential equations, a new family of deep neural network models that parameterize the derivative of the hidden state using a neural network; the output of the network is computed using a black-box differential equation solver, and continuous normalizing flows are constructed as one application.
Abstract: We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.

1,033 citations