Akhil Gurram

Researcher at Autonomous University of Barcelona

Publications - 5
Citations - 28

Akhil Gurram is an academic researcher from the Autonomous University of Barcelona. The author has contributed to research in topics including Encoder and Image segmentation. The author has an h-index of 2 and has co-authored 5 publications receiving 18 citations. Previous affiliations of Akhil Gurram include Huawei.

Papers
Proceedings ArticleDOI

Monocular Depth Estimation by Learning from Heterogeneous Datasets

TL;DR: This paper shows that CNNs for depth estimation can be trained by leveraging the depth and semantic information coming from heterogeneous datasets; by combining the KITTI depth and Cityscapes semantic segmentation datasets, it outperforms state-of-the-art results on monocular depth estimation.
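
The summary above describes a multi-task setup in which the depth labels and the semantic labels come from different datasets. As a rough illustration only (a minimal PyTorch sketch; the layer sizes, heads, and equal loss weighting are assumptions, not the paper's architecture), a shared encoder can feed a depth head and a segmentation head, each trained on the batch from its own dataset:

```python
# Minimal sketch: one shared encoder, two task heads, trained on a depth batch
# (KITTI-style) and a semantic batch (Cityscapes-style) from different datasets.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEncoder(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class DepthHead(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.out = nn.Conv2d(ch, 1, 3, padding=1)
    def forward(self, f, size):
        d = F.interpolate(self.out(f), size=size, mode="bilinear", align_corners=False)
        return F.relu(d)  # keep predicted depth non-negative

class SegHead(nn.Module):
    def __init__(self, ch=64, n_classes=19):
        super().__init__()
        self.out = nn.Conv2d(ch, n_classes, 3, padding=1)
    def forward(self, f, size):
        return F.interpolate(self.out(f), size=size, mode="bilinear", align_corners=False)

encoder, depth_head, seg_head = SharedEncoder(), DepthHead(), SegHead()
params = list(encoder.parameters()) + list(depth_head.parameters()) + list(seg_head.parameters())
opt = torch.optim.Adam(params, lr=1e-4)

# Dummy batches standing in for the two heterogeneous datasets.
img_d, gt_depth = torch.rand(2, 3, 128, 256), torch.rand(2, 1, 128, 256)
img_s, gt_seg = torch.rand(2, 3, 128, 256), torch.randint(0, 19, (2, 128, 256))

f_d, f_s = encoder(img_d), encoder(img_s)
loss_depth = F.l1_loss(depth_head(f_d, img_d.shape[-2:]), gt_depth)
loss_seg = F.cross_entropy(seg_head(f_s, img_s.shape[-2:]), gt_seg)
loss = loss_depth + loss_seg  # equal task weighting is an assumption here
opt.zero_grad(); loss.backward(); opt.step()
```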
Posted Content

Monocular Depth Estimation by Learning from Heterogeneous Datasets

TL;DR: In this article, the authors leverage the depth and semantic information coming from heterogeneous datasets (KITTI depth and Cityscapes semantic segmentation) to train CNNs for monocular depth estimation, outperforming state-of-the-art results.
Journal ArticleDOI

Semantic Monocular Depth Estimation Based on Artificial Intelligence

TL;DR: This paper shows that one can train CNNs for depth estimation by leveraging the depth and semantic information coming from heterogeneous datasets; combining the KITTI depth and Cityscapes semantic segmentation datasets, the approach outperforms state-of-the-art results on monocular depth estimation.
Posted Content

Monocular Depth Estimation through Virtual-world Supervision and Real-world SfM Self-Supervision.

TL;DR: In this article, MonoDEVS, a monocular depth estimation approach that combines virtual-world supervision with real-world SfM self-supervision, is proposed; it compensates for the limitations of SfM by leveraging virtual-world images with accurate semantic and depth supervision while addressing the virtual-to-real domain gap.
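
As a minimal sketch of the kind of mixed objective the summary describes (assumed details, not the MonoDEVS implementation: the view-synthesis step that would produce the hypothetical recon_real from predicted depth and pose is omitted, and the weight lam is illustrative):

```python
# Supervised depth loss on virtual-world images plus an SfM-style photometric
# self-supervision term on real-world frames; weighting and names are assumed.
import torch
import torch.nn.functional as F

def supervised_depth_loss(pred_depth_virtual, gt_depth_virtual):
    # Virtual-world images come with dense, accurate depth labels.
    return F.l1_loss(pred_depth_virtual, gt_depth_virtual)

def photometric_loss(target_real, recon_real):
    # Self-supervision: the target frame reconstructed from a neighbouring
    # real frame (via predicted depth and pose, omitted here) should match it.
    return (target_real - recon_real).abs().mean()

def total_loss(pred_d_v, gt_d_v, target_r, recon_r, lam=1.0):
    # lam balances the supervised and self-supervised terms (illustrative).
    return supervised_depth_loss(pred_d_v, gt_d_v) + lam * photometric_loss(target_r, recon_r)

# Toy tensors only; in practice these come from the depth network, the pose
# network, and the warping step.
loss = total_loss(torch.rand(1, 1, 64, 128), torch.rand(1, 1, 64, 128),
                  torch.rand(1, 3, 64, 128), torch.rand(1, 3, 64, 128))
print(float(loss))
```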
Posted Content

TridentAdapt: Learning Domain-invariance via Source-Target Confrontation and Self-induced Cross-domain Augmentation.

TL;DR: In this paper, TridentAdapt is proposed: a trident-like architecture that enforces a shared feature encoder to satisfy confrontational source and target constraints simultaneously, thus learning a domain-invariant feature space.
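
The summary describes a shared encoder that must satisfy source and target constraints at the same time. The sketch below is one assumed way to express that idea (illustrative names and losses, not the TridentAdapt code): features from either domain are asked to be reconstructable by both a source-specific and a target-specific decoder, which pressures the encoder toward domain-invariant features.

```python
# Hedged sketch of a shared encoder under simultaneous source/target constraints.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Conv2d(ch, 3, 3, padding=1)
    def forward(self, f):
        return self.net(f)

enc, dec_src, dec_tgt = Encoder(), Decoder(), Decoder()
x_src, x_tgt = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
f_src, f_tgt = enc(x_src), enc(x_tgt)

# Within-domain reconstruction keeps the shared features informative ...
loss_within = F.l1_loss(dec_src(f_src), x_src) + F.l1_loss(dec_tgt(f_tgt), x_tgt)
# ... while cross-domain reconstruction is one (assumed) way to impose the
# simultaneous source/target constraints on the shared encoder.
loss_cross = F.l1_loss(dec_tgt(f_src), x_src) + F.l1_loss(dec_src(f_tgt), x_tgt)
loss = loss_within + loss_cross
loss.backward()
```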