
Noha Radwan

Researcher at University of Freiburg

Publications -  17
Citations -  1270

Noha Radwan is an academic researcher from the University of Freiburg. The author has contributed to research on the topics of deep learning and multi-task learning. The author has an h-index of 11 and has co-authored 17 publications receiving 545 citations. Previous affiliations of Noha Radwan include Google.

Papers
Posted Content

NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections

TL;DR: A learning-based method for synthesizing novel views of complex scenes using only unstructured collections of in-the-wild photographs is presented and applied to internet photo collections of famous landmarks, demonstrating temporally consistent novel view renderings that are significantly closer to photorealism than the prior state of the art.
Journal ArticleDOI

VLocNet++: Deep Multitask Learning for Semantic Visual Localization and Odometry

TL;DR: VLocNet++ employs a multitask learning approach to exploit the inter-task relationship between learning semantics, regressing 6-DoF global pose, and estimating odometry, for the mutual benefit of each of these tasks.
Proceedings ArticleDOI

Deep Auxiliary Learning for Visual Localization and Odometry

TL;DR: VLocNet proposes a novel loss function that utilizes auxiliary learning to leverage relative pose information during training, thereby constraining the search space to obtain consistent pose estimates, and achieves competitive performance for visual odometry estimation.
Proceedings ArticleDOI

NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections

TL;DR: In this article, a learning-based method for synthesizing novel views of complex scenes using only unstructured collections of in-the-wild photographs is presented, which uses the weights of a multi-layer perceptron to model the density and color of a scene as a function of 3D coordinates.
Posted Content

Deep Auxiliary Learning for Visual Localization and Odometry.

TL;DR: VLocNet proposes a novel loss function that utilizes auxiliary learning to leverage relative pose information during training, thereby constraining the search space to obtain consistent pose estimates, and achieves competitive performance for visual odometry estimation.