Author

Franz-Josef Pfreundt

Bio: Franz-Josef Pfreundt is an academic researcher at the Fraunhofer Society. He has contributed to research on topics including deep learning and stochastic gradient descent. He has an h-index of 13 and has co-authored 63 publications receiving 592 citations. Previous affiliations of Franz-Josef Pfreundt include the Fraunhofer Institute for Industrial Mathematics and Kaiserslautern University of Technology.


Papers
Posted Content
TL;DR: This work presents a simple way to detect fake face images - so-called DeepFakes - based on a classical frequency-domain analysis followed by a basic classifier; it shows very good results using only a few annotated training samples and even achieves good accuracy in fully unsupervised scenarios.
Abstract: Deep generative models have recently achieved impressive results for many real-world applications, successfully generating high-resolution and diverse samples from complex datasets. With this improvement, fake digital content has proliferated, raising concern and spreading distrust in image content, and leading to an urgent need for automated ways to detect these AI-generated fake images. Although many face-editing algorithms seem to produce realistic human faces, upon closer examination they do exhibit artifacts in certain domains that are often hidden to the naked eye. In this work, we present a simple way to detect such fake face images - so-called DeepFakes. Our method is based on a classical frequency-domain analysis followed by a basic classifier. Compared to previous systems, which need to be fed with large amounts of labeled data, our approach shows very good results using only a few annotated training samples and even achieves good accuracy in fully unsupervised scenarios. For the evaluation on high-resolution face images, we combined several public datasets of real and fake faces into a new benchmark: Faces-HQ. Given such high-resolution images, our approach reaches a perfect classification accuracy of 100% when trained on as few as 20 annotated samples. In a second experiment, on the medium-resolution images of the CelebA dataset, our method achieves 100% accuracy in a supervised and 96% in an unsupervised setting. Finally, on the low-resolution video sequences of the FaceForensics++ dataset, our method achieves 91% accuracy in detecting manipulated videos.
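
To make the pipeline concrete, here is a minimal sketch of the kind of detector the abstract describes: a 2D FFT, an azimuthal average of the log power spectrum as a 1D feature vector, and a basic classifier on top. Function names, parameters, and the choice of logistic regression are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch (assumed details): detect fake faces from the
# azimuthally averaged log power spectrum of the image.
import numpy as np
from sklearn.linear_model import LogisticRegression

def spectral_features(img, n_bins=64):
    """1D azimuthal average of the log power spectrum of a grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    psd = np.log(np.abs(f) ** 2 + 1e-8)
    h, w = psd.shape
    y, x = np.indices(psd.shape)
    r = np.hypot(x - w / 2, y - h / 2).astype(int)   # integer radius per pixel
    sums = np.bincount(r.ravel(), weights=psd.ravel())
    counts = np.maximum(np.bincount(r.ravel()), 1)
    radial = sums / counts                            # mean power per radius
    idx = np.linspace(0, len(radial) - 1, n_bins).astype(int)
    return radial[idx]

def train_detector(real_imgs, fake_imgs):
    """Fit a basic classifier on spectral features of a few labeled images."""
    X = np.stack([spectral_features(im) for im in list(real_imgs) + list(fake_imgs)])
    y = np.array([0] * len(real_imgs) + [1] * len(fake_imgs))
    return LogisticRegression(max_iter=1000).fit(X, y)
```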

131 citations

Posted Content
TL;DR: The presented results show that the current state-of-the-art approach, data-parallelized Stochastic Gradient Descent (SGD), is quickly turning into a vastly communication-bound problem, leading to poor scalability of DNN training in most practical scenarios.
Abstract: This paper presents a theoretical analysis and practical evaluation of the main bottlenecks on the way to a scalable distributed solution for training Deep Neural Networks (DNNs). The presented results show that the current state-of-the-art approach, data-parallelized Stochastic Gradient Descent (SGD), is quickly turning into a vastly communication-bound problem. In addition, we present simple but fixed theoretical constraints that prevent effective scaling of DNN training beyond only a few dozen nodes. This leads to poor scalability of DNN training in most practical scenarios.
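
A back-of-the-envelope model makes the communication-bound argument tangible: per optimization step, the compute time shrinks with the node count while the gradient allreduce does not. The numbers below (model size, bandwidth, single-node compute time) are illustrative assumptions, not figures from the paper.

```python
# Toy scaling model for data-parallel SGD: compute scales down with N,
# the gradient allreduce cost does not, so efficiency collapses.
def step_time(nodes, compute_1node_s=1.0, model_bytes=250e6, bw_bytes_s=10e9):
    compute = compute_1node_s / nodes
    # A ring allreduce moves ~2*(N-1)/N * model_bytes per node.
    comm = 0.0 if nodes == 1 else 2 * (nodes - 1) / nodes * model_bytes / bw_bytes_s
    return compute + comm

for n in (1, 2, 8, 32, 128):
    speedup = step_time(1) / step_time(n)
    print(f"{n:4d} nodes: speedup {speedup:5.1f}, efficiency {speedup / n:.2f}")
```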

73 citations

Journal ArticleDOI
TL;DR: In this paper, an algebro-geometrically motivated integration-by-parts (IBP) reduction method for multi-loop and multi-scale Feynman integrals is introduced, using a framework for massively parallel computations in computer algebra.
Abstract: We introduce an algebro-geometrically motivated integration-by-parts (IBP) reduction method for multi-loop and multi-scale Feynman integrals, using a framework for massively parallel computations in computer algebra. This framework combines the computer algebra system Singular with the workflow management system GPI-Space, which are being developed at TU Kaiserslautern and the Fraunhofer Institute for Industrial Mathematics (ITWM), respectively. In our approach, the IBP relations are first trimmed by modern tools from computational algebraic geometry and then solved by sparse linear algebra and our new interpolation method. Modelled in terms of Petri nets, these steps are efficiently automated and automatically parallelized by GPI-Space. We demonstrate the potential of our method on the nontrivial example of reducing two-loop five-point nonplanar double-pentagon integrals. We also use GPI-Space to convert the basis of IBP reductions, and discuss the possible simplification of master-integral coefficients in a uniformly transcendental basis.
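
As a toy illustration of the "solve by sparse linear algebra" step (and emphatically not the authors' Singular/GPI-Space pipeline), IBP relations can be viewed as linear relations among integrals whose coefficients are rational functions of the spacetime dimension d; row reduction then expresses the reducible integrals in terms of the master integrals. The relations below are made up purely for demonstration.

```python
# Made-up IBP-like relations among four symbolic integrals I1..I4;
# row-reducing the coefficient matrix identifies the master integrals.
import sympy as sp

I1, I2, I3, I4 = sp.symbols("I1 I2 I3 I4")
d = sp.symbols("d")  # spacetime dimension, entering the coefficients

# Each row: coefficients of (I1, I2, I3, I4) in one relation set to zero.
M = sp.Matrix([
    [d - 4, 1,     0, 0],
    [0,     d - 6, 2, 0],
    [1,     0,     0, d - 3],
])
rref, pivots = M.rref()  # exact rational-function arithmetic
print(rref)    # pivot columns are reducible integrals
print(pivots)  # the remaining free column (I4) plays the role of a master
```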

50 citations

Proceedings ArticleDOI
12 Nov 2011
TL;DR: This work develops an FPGA-accelerated architectural simulation platform to accurately model the power and performance of "Green Wave", a general-purpose manycore chip design optimized for high-order wave equations.
Abstract: Reverse Time Migration (RTM) has become the standard for high-quality imaging in the seismic industry. RTM relies on PDE solutions using stencils that are 8th order or larger, which require large-scale HPC clusters to meet the computational demands. However, the rising power consumption of conventional cluster technology has prompted investigation of architectural alternatives that offer higher computational efficiency. In this work, we compare the performance and energy efficiency of three architectural alternatives -- the Intel Nehalem X5530 multicore processor, the NVIDIA Tesla C2050 GPU, and a general-purpose manycore chip design optimized for high-order wave equations called "Green Wave". We have developed an FPGA-accelerated architectural simulation platform to accurately model the power and performance of the Green Wave design. Results show that across highly-tuned high-order RTM stencils, the Green Wave implementation can offer up to 8x and 3.5x energy-efficiency improvements per node compared with the Nehalem and GPU platforms, respectively. These results point to the enormous potential energy advantages of our hardware/software co-design methodology.
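
For orientation, the computational core such RTM codes revolve around looks roughly like the following: a high-order finite-difference update for the acoustic wave equation. This is a generic 8th-order leapfrog sketch in numpy (periodic boundaries via np.roll for brevity), not the paper's tuned kernels.

```python
# Generic 8th-order leapfrog kernel for the 2D acoustic wave equation,
# the inner loop RTM is built around (illustrative; production codes use
# heavily tuned stencil loops instead of np.roll's periodic boundaries).
import numpy as np

C = np.array([-205/72, 8/5, -1/5, 8/315, -1/560])  # 8th-order d2/dx2 weights

def wave_step(p, p_prev, vel, dt, dx):
    """One leapfrog time step of p_tt = vel^2 * Laplacian(p)."""
    lap = 2 * C[0] * p                     # central coefficient, once per axis
    for k in range(1, 5):
        lap += C[k] * (np.roll(p, k, 0) + np.roll(p, -k, 0)
                       + np.roll(p, k, 1) + np.roll(p, -k, 1))
    return 2 * p - p_prev + (vel * dt / dx) ** 2 * lap
```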

49 citations

Journal ArticleDOI
TL;DR: An algebro-geometrically motived integration-by-parts (IBP) method for multi-loop and multi-scale Feynman integrals, using a framework for massively parallel computations in computer algebra.
Abstract: We introduce an algebro-geometrically motivated integration-by-parts (IBP) reduction method for multi-loop and multi-scale Feynman integrals, using a framework for massively parallel computations in computer algebra. This framework combines the computer algebra system Singular with the workflow management system GPI-Space, which is being developed at the Fraunhofer Institute for Industrial Mathematics (ITWM). In our approach, the IBP relations are first trimmed by modern algebraic geometry tools and then solved by sparse linear algebra and our new interpolation methods. These steps are efficiently automated and automatically parallelized by modeling the algorithm in GPI-Space using the language of Petri nets. We demonstrate the potential of our method on the nontrivial example of reducing two-loop five-point nonplanar double-pentagon integrals. We also use GPI-Space to convert the basis of IBP reductions, and discuss the possible simplification of IBP coefficients in a uniformly transcendental basis.

41 citations


Cited by

01 Jan 2007
TL;DR: Two algorithms for generating the Gaussian quadrature rule defined by the weight function when: a) the three-term recurrence relation is known for the orthogonal polynomials generated by $\omega(t)$, and b) the moments of the weight function are known or can be calculated.
Abstract: Most numerical integration techniques consist of approximating the integrand by a polynomial in a region or regions and then integrating the polynomial exactly. Often a complicated integrand can be factored into a non-negative "weight" function and another function better approximated by a polynomial, thus $\int_{a}^{b} g(t)\,dt = \int_{a}^{b} \omega(t)f(t)\,dt \approx \sum_{i=1}^{N} w_i f(t_i)$. Hopefully, the quadrature rule $\{w_i, t_i\}_{i=1}^{N}$ corresponding to the weight function $\omega(t)$ is available in tabulated form, but more likely it is not. We present here two algorithms for generating the Gaussian quadrature rule defined by the weight function when: a) the three-term recurrence relation is known for the orthogonal polynomials generated by $\omega(t)$, and b) the moments of the weight function are known or can be calculated.
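
A compact sketch of the first of these algorithms (the Jacobi-matrix route usually credited to this paper): the quadrature nodes are the eigenvalues of the symmetric tridiagonal matrix built from the three-term recurrence coefficients, and the weights follow from the first components of its eigenvectors. Shown here for the Legendre case $\omega(t) = 1$ on $[-1, 1]$, whose recurrence coefficients are known in closed form; the function name is ours.

```python
# Golub-Welsch-style construction of a Gaussian quadrature rule from the
# three-term recurrence, specialized to the Legendre weight on [-1, 1].
import numpy as np

def gauss_legendre(n):
    k = np.arange(1, n)
    beta = k / np.sqrt(4 * k**2 - 1)          # off-diagonal recurrence coeffs
    J = np.diag(beta, 1) + np.diag(beta, -1)  # symmetric Jacobi matrix
    nodes, vecs = np.linalg.eigh(J)           # eigenvalues = quadrature nodes
    mu0 = 2.0                                 # integral of the weight over [-1, 1]
    weights = mu0 * vecs[0, :] ** 2           # first eigenvector components
    return nodes, weights

t, w = gauss_legendre(5)
print(np.sum(w * t**8), 2 / 9)  # 5-point rule is exact up to degree 2n-1 = 9
```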

1,007 citations

Journal ArticleDOI
Tal Ben-Nun, Torsten Hoefler
TL;DR: This survey describes the problem of parallelizing DNN training from a theoretical perspective, reviews approaches for its parallelization, and extrapolates potential directions for parallelism in deep learning.
Abstract: Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge and techniques range from distributed algorithms to low-level circuit design. In this survey, we describe the problem from a theoretical perspective, followed by approaches for its parallelization. We present trends in DNN architectures and the resulting implications on parallelization strategies. We then review and model the different types of concurrency in DNNs: from the single operator, through parallelism in network inference and training, to distributed deep learning. We discuss asynchronous stochastic optimization, distributed system architectures, communication schemes, and neural architecture search. Based on those approaches, we extrapolate potential directions for parallelism in deep learning.
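
To ground the survey's taxonomy, here is a framework-free sketch contrasting two of the concurrency types it covers; the linear model and all names are illustrative assumptions. Data parallelism shards the batch and averages gradients (the allreduce that dominates communication at scale), while model parallelism splits layers across devices so that only activations cross the device boundary.

```python
# Minimal contrast of data vs. model parallelism on a toy linear model.
import numpy as np

def grad(w, x, y):                       # gradient of 0.5*||x @ w - y||^2
    return x.T @ (x @ w - y) / len(x)

# Data parallel: each "worker" sees one batch shard; gradients are averaged
# (in a real system this mean is the allreduce step).
def data_parallel_step(w, x, y, workers=4, lr=0.1):
    shards = zip(np.array_split(x, workers), np.array_split(y, workers))
    g = np.mean([grad(w, xs, ys) for xs, ys in shards], axis=0)
    return w - lr * g

# Model parallel: layer 1 and layer 2 would live on different devices;
# only the activation h crosses the device boundary.
def model_parallel_forward(x, w1, w2):
    h = np.maximum(x @ w1, 0)            # "device 0": first layer + ReLU
    return h @ w2                        # "device 1": second layer
```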

433 citations