
Julius von Kügelgen

Researcher at Max Planck Society

Publications: 33
Citations: 397

Julius von Kügelgen is an academic researcher from the Max Planck Society. The author has contributed to research on the topics of computer science and identifiability. The author has an h-index of 8, having co-authored 32 publications receiving 184 citations. Previous affiliations of Julius von Kügelgen include Imperial College London and the University of Cambridge.

Papers
Posted Content

Algorithmic recourse under imperfect causal knowledge: a probabilistic approach

TL;DR: This work shows that it is impossible to guarantee recourse without access to the true structural equations, and proposes two probabilistic approaches to select optimal actions that achieve recourse with high probability given limited causal knowledge.
Posted Content

On the Fairness of Causal Algorithmic Recourse.

TL;DR: Two new fairness criteria at the group and individual level are proposed, based on a causal framework that explicitly models relationships between input features, thereby capturing downstream effects of recourse actions performed in the physical world.
Posted Content

Towards causal generative scene models via competition of experts

TL;DR: This work presents an alternative approach that uses an inductive bias encouraging modularity by training an ensemble of generative models (experts), allowing controllable sampling of individual objects and recombination of experts in physically plausible ways.
Proceedings Article

Algorithmic recourse under imperfect causal knowledge: a probabilistic approach

TL;DR: The authors propose two probabilistic approaches to select optimal actions that achieve recourse with high probability given limited causal knowledge (e.g., only the causal graph), and empirically show that the proposed approaches yield more reliable recommendations under imperfect causal knowledge than non-probabilistic baselines.
Posted Content

Visual Representation Learning Does Not Generalize Strongly Within the Same Domain

TL;DR: This paper tests whether 17 unsupervised, weakly supervised, and fully supervised representation learning approaches correctly infer the generative factors of variation in simple datasets, and observes that all of them struggle to learn the underlying mechanism regardless of supervision signal and architectural bias.