Author

Michael Moeller

Bio: Michael Moeller is an academic researcher from the University of Siegen. The author has contributed to research in topics: Deep learning & Artificial neural network. The author has an h-index of 20 and has co-authored 75 publications receiving 1,772 citations. Previous affiliations of Michael Moeller include the University of Münster & Technische Universität München.


Papers
Proceedings Article
01 Jan 2020
TL;DR: In this paper, the authors show that it is possible to reconstruct images at high resolution from the knowledge of their parameter gradients, and demonstrate that such a break of privacy is possible even for trained deep networks.
Abstract: The idea of federated learning is to collaboratively train a neural network on a server. Each user receives the current weights of the network and in turn sends parameter updates (gradients) based on local data. This protocol has been designed not only to train neural networks data-efficiently, but also to provide privacy benefits for users, as their input data remains on device and only parameter gradients are shared. But how secure is sharing parameter gradients? Previous attacks have provided a false sense of security by succeeding only in contrived settings, even for a single image. However, by exploiting a magnitude-invariant loss along with optimization strategies based on adversarial attacks, we show that it is actually possible to faithfully reconstruct images at high resolution from the knowledge of their parameter gradients, and demonstrate that such a break of privacy is possible even for trained deep networks. We analyze the effects of architecture as well as parameters on the difficulty of reconstructing an input image and prove that any input to a fully connected layer can be reconstructed analytically, independently of the remaining architecture. Finally, we discuss settings encountered in practice and show that even averaging gradients over several iterations or several images does not protect the user's privacy in federated learning applications in computer vision.
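A minimal sketch of the attack idea in this abstract, assuming a PyTorch classification model whose shared gradients the attacker observes; the function name, image shape, optimizer settings, and the assumption that the label is known (a LongTensor) are illustrative, and the paper's additional image priors and restarts are omitted:

```python
import torch
import torch.nn.functional as F

def invert_gradients(model, true_grads, label, shape=(1, 3, 32, 32), steps=1000):
    """Reconstruct an input by matching its parameter gradients (sketch)."""
    x = torch.randn(shape, requires_grad=True)      # dummy input to optimize
    opt = torch.optim.Adam([x], lr=0.1)
    params = [p for p in model.parameters() if p.requires_grad]
    target = torch.cat([g.flatten() for g in true_grads]).detach()
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), label)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        flat = torch.cat([g.flatten() for g in grads])
        # Magnitude-invariant objective: match gradient *direction*, not scale.
        rec_loss = 1 - F.cosine_similarity(flat, target, dim=0)
        rec_loss.backward()
        opt.step()
    return x.detach()
```

The cosine objective is what makes the loss invariant to gradient magnitude; the optimization over the dummy input mirrors adversarial-attack techniques, as the abstract notes.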

423 citations

Journal ArticleDOI
TL;DR: Two new modifications that improve the spectral quality of the Intensity-Hue-Saturation (IHS) method are introduced, and an adaptive IHS method that incorporates both techniques is proposed.
Abstract: The goal of pan-sharpening is to fuse a low spatial resolution multispectral image with a higher resolution panchromatic image to obtain an image with high spectral and spatial resolution. The Intensity-Hue-Saturation (IHS) method is a popular pan-sharpening method, valued for its efficiency and high spatial resolution; however, the fused image it produces suffers from spectral distortion. In this letter, we introduce two new modifications to improve the spectral quality of the image. First, we propose image-adaptive coefficients for IHS to obtain more accurate spectral resolution. Second, we propose an edge-adaptive IHS method to enforce spectral fidelity away from the edges. Experimental results show that these two modifications improve spectral resolution compared to the original IHS, and we propose an adaptive IHS that incorporates both techniques. The adaptive IHS method produces images with higher spectral resolution while maintaining the high-quality spatial resolution of the original IHS.
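A rough sketch of the two modifications, assuming `ms` is the multispectral image already upsampled to the panchromatic grid; the least-squares intensity fit and the exponential edge weight are one plausible reading of "image-adaptive coefficients" and "edge-adaptive" injection, not the paper's exact formulas:

```python
import numpy as np

def adaptive_ihs(ms, pan, lam=1e-8, eps=1e-12):
    """Pan-sharpening sketch: ms is (H, W, B), pan is (H, W), both in [0, 1]."""
    H, W, B = ms.shape
    A = ms.reshape(-1, B)
    # Image-adaptive coefficients: fit I = sum_k alpha_k * MS_k to pan, so the
    # substituted intensity matches the panchromatic channel spectrally.
    alpha, *_ = np.linalg.lstsq(A, pan.ravel(), rcond=None)
    I = (A @ alpha).reshape(H, W)
    # Edge-adaptive injection: weight the pan detail by an edge indicator so
    # spectral values are preserved away from edges.
    gy, gx = np.gradient(pan)
    weight = np.exp(-lam / (np.hypot(gx, gy) ** 4 + eps))  # ~1 at edges, ~0 in flat areas
    return ms + (weight * (pan - I))[..., None]
```

In flat regions the weight suppresses the injected detail `pan - I`, which is what enforces spectral fidelity away from edges.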

390 citations

Proceedings ArticleDOI
01 Oct 2017
TL;DR: In this paper, the proximal operator of the regularizer used in many convex energy minimization algorithms is replaced by a fixed denoising neural network, which then serves as an implicit natural image prior.
Abstract: While variational methods have been among the most powerful tools for solving linear inverse problems in imaging, deep (convolutional) neural networks have recently taken the lead in many challenging benchmarks. A remaining drawback of deep learning approaches is their requirement for expensive retraining whenever the specific problem, the noise level, noise type, or desired measure of fidelity changes. In contrast, variational methods have a plug-and-play nature, as they usually consist of separate data fidelity and regularization terms. In this paper we study the possibility of replacing the proximal operator of the regularization used in many convex energy minimization algorithms by a denoising neural network. The latter therefore serves as an implicit natural image prior, while the data term can still be chosen independently. Using a fixed denoising neural network in exemplary problems of image deconvolution with different blur kernels and image demosaicking, we obtain state-of-the-art reconstruction results. These indicate the high generalizability of our approach and a reduced need for problem-specific training. Additionally, we discuss novel results on the analysis of which optimization algorithms the network can be incorporated into, as well as the choice of algorithm parameters and their relation to the noise level the neural network is trained on.
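A compact plug-and-play sketch of this idea, written here as ADMM for min_x ½‖Ax − b‖² + R(x) with the proximal step of R replaced by a fixed denoiser; the dense linear solve and the `denoise` callable are illustrative placeholders, not the paper's specific setup:

```python
import numpy as np

def pnp_admm(A, b, denoise, rho=1.0, iters=50):
    """Plug-and-play ADMM sketch: `denoise` stands in for the prox of R."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    M = A.T @ A + rho * np.eye(n)   # normal equations for the x-update
    for _ in range(iters):
        x = np.linalg.solve(M, A.T @ b + rho * (z - u))  # data-fidelity prox
        z = denoise(x + u)                               # denoiser replaces prox_R
        u = u + x - z                                    # scaled dual update
    return z
```

The data term (A, b) can be swapped per problem while the same trained denoiser is reused, which is the generalizability the abstract describes.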

323 citations

Journal ArticleDOI
TL;DR: Results on the orthogonality of the decomposition, a Parseval-type identity and the notion of generalized (nonlinear) eigenvectors closely link the nonlinear multiscale decompositions to the well-known linear filtering theory.
Abstract: This paper discusses the use of absolutely one-homogeneous regularization functionals in a variational, scale space, and inverse scale space setting to define a nonlinear spectral decomposition of input data. We present several theoretical results that explain the relation between the different definitions. Additionally, results on the orthogonality of the decomposition, a Parseval-type identity, and the notion of generalized (nonlinear) eigenvectors closely link our nonlinear multiscale decompositions to the well-known linear filtering theory. Numerical results are used to illustrate our findings.
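For concreteness, the scale-space (gradient-flow) variant referenced in the abstract can be stated compactly; the notation follows the standard one-homogeneous setting and is illustrative rather than a quotation of the paper:

```latex
% Gradient flow of an absolutely one-homogeneous functional J, started at the data f:
\[
  \partial_t u(t) = -p(t), \qquad p(t) \in \partial J\bigl(u(t)\bigr), \qquad u(0) = f.
\]
% Spectral response and reconstruction of the data from all scales:
\[
  \phi(t) = t\,\partial_{tt} u(t), \qquad
  f = \int_0^\infty \phi(t)\,\mathrm{d}t + \bar{f},
\]
% where \bar{f} is the component of f in the null space of J. A generalized
% eigenvector, \lambda u \in \partial J(u), decays linearly under the flow, so
% \phi(t) concentrates at the single scale t = 1/\lambda, the nonlinear
% analogue of a pure frequency in linear filtering.
```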

108 citations


Cited by
Book
21 Feb 1970

986 citations

Journal ArticleDOI
TL;DR: It is concluded that although various image fusion methods have been proposed, several future directions remain across different image fusion applications, and research in the image fusion field is expected to grow significantly in the coming years.

871 citations

Journal ArticleDOI
TL;DR: In this paper, the convergence of the alternating direction method of multipliers (ADMM) for minimizing a nonconvex and possibly nonsmooth objective function is analyzed, subject to coupled linear equality constraints.
Abstract: In this paper, we analyze the convergence of the alternating direction method of multipliers (ADMM) for minimizing a nonconvex and possibly nonsmooth objective function $\phi(x_0,\ldots,x_p,y)$, subject to coupled linear equality constraints. Our ADMM updates each of the primal variables $x_0,\ldots,x_p,y$, followed by updating the dual variable. We separate the variable $y$ from the $x_i$'s as it has a special role in our analysis. The developed convergence guarantee covers a variety of nonconvex functions, such as piecewise linear functions, the $\ell_q$ quasi-norm, and the Schatten-$q$ quasi-norm ($0 < q < 1$), among others.
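The update scheme analyzed here has the familiar Gauss-Seidel form; sketched below for the constraint $A_0 x_0 + \cdots + A_p x_p + B y = b$ with penalty parameter $\rho$ (notation illustrative, not the paper's exact statement):

```latex
\[
  L_\rho(x_0,\ldots,x_p,y,w) = \phi(x_0,\ldots,x_p,y)
  + \langle w,\, r \rangle + \tfrac{\rho}{2}\,\|r\|^2,
  \qquad r := \textstyle\sum_{i} A_i x_i + B y - b,
\]
\begin{align*}
  x_i^{k+1} &\in \operatorname*{arg\,min}_{x_i}\;
     L_\rho\bigl(x_0^{k+1},\ldots,x_{i-1}^{k+1},\,x_i,\,x_{i+1}^{k},\ldots,x_p^{k},\,y^{k},\,w^{k}\bigr),
     \qquad i = 0,\ldots,p,\\
  y^{k+1} &\in \operatorname*{arg\,min}_{y}\;
     L_\rho\bigl(x_0^{k+1},\ldots,x_p^{k+1},\,y,\,w^{k}\bigr),\\
  w^{k+1} &= w^{k} + \rho\,\bigl(\textstyle\sum_i A_i x_i^{k+1} + B y^{k+1} - b\bigr).
\end{align*}
```

Updating $y$ last, after all $x_i$-blocks, is what gives it the special role the abstract mentions.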

867 citations