Gesina Schwalbe
Researcher at Continental AG
Publications - 17
Citations - 120
Gesina Schwalbe is an academic researcher at Continental AG. The author has contributed to research on the topics of computer science and safety cases, has an h-index of 5, and has co-authored 11 publications receiving 56 citations. Previous affiliations of Gesina Schwalbe include the University of Bamberg and Continental Automotive Systems.
Papers
A Survey on Methods for the Safety Assurance of Machine Learning Based Systems
TL;DR: This work provides a structured, certification-oriented overview of available methods supporting the safety argumentation of an ML-based system. The methods are sorted into life-cycle phases, and the maturity of each approach as well as its applicability to different ML types are collected.
Book Chapter
Expressive Explanations of DNNs by Combining Concept Analysis with ILP
TL;DR: This paper uses inherent features learned by the network to build a global, expressive, verbal explanation of the rationale of a feed-forward convolutional deep neural network (DNN) and shows that the explanation is faithful to the original black-box model.
Book Chapter
Structuring the Safety Argumentation for Deep Neural Network Based Perception in Automotive Applications.
Gesina Schwalbe, Bernhard Knie, Timo Sämann, Timo Dobberphul, Lydia Gauerhof, Shervin Raafatnia, Vittorio Rocco +6 more
TL;DR: A generic approach and template for thoroughly respecting DNN specifics within a safety argumentation structure is developed. Applicability is shown by providing examples of methods and measures for an example use case based on pedestrian detection.
Posted Content
Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety.
Sebastian Houben, Stephanie Abrecht, Maram Akila, Andreas Bär, Felix Brockherde, Patrick Feifel, Tim Fingscheidt, Sujan Sai Gannamaneni, Seyed Eghbal Ghobadi, Ahmed Hammam, Anselm Haselhoff, Felix Hauser, Christian Heinzemann, Marco Hoffmann, Nikhil Kapoor, Falk Kappel, Marvin Klingner, Jan Kronenberger, Fabian Küppers, Jonas Löhdefink, Michael Mlynarski, Michael Mock, Firas Mualla, Svetlana Pavlitskaya, Maximilian Poretschkin, Alexander Pohl, Varun Ravi Kumar, Julia Rosenzweig, Matthias Rottmann, Stefan Rüping, Timo Sämann, Jan David Schneider, Elena Schulz, Gesina Schwalbe, Joachim Sicking, Toshika Srivastava, Serin Varghese, Michael Weber, Sebastian J. Wirkert, Tim Wirtz, Matthias Woehrle +40 more
TL;DR: In this article, the authors provide a structured and broad overview of the state-of-the-art techniques aiming to address the model-inherent shortcomings of deep neural networks (DNNs).
Concept Enforcement and Modularization as Methods for the ISO 26262 Safety Argumentation of Neural Networks
TL;DR: A unified approach to two methods for NN safety argumentation: the assignment of human-interpretable concepts to the internal representations of NNs, enabling modularization and formal verification.