Martin Rünz

Researcher at University College London

Publications: 10
Citations: 668

Martin Rünz is an academic researcher from University College London. He has contributed to research in the topics of Segmentation and 3D reconstruction, has an h-index of 6, and has co-authored 8 publications receiving 403 citations. Previous affiliations of Martin Rünz include the University of Koblenz and Landau.

Papers
Proceedings ArticleDOI

MaskFusion: Real-Time Recognition, Tracking and Reconstruction of Multiple Moving Objects

TL;DR: MaskFusion is a real-time object-aware, semantic and dynamic RGB-D SLAM system that goes beyond traditional systems, which output a purely geometric map of a static scene.
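
The summary above captures the key architectural idea: rather than one static map, the system keeps a separate model per recognized object and tracks and fuses each one independently, using only that object's pixels. Below is a minimal, self-contained Python sketch of that per-object loop. All names (ObjectModel, process_frame, the centroid "tracking") are illustrative stand-ins, not the MaskFusion codebase; a real system would run masked ICP / photometric alignment and surfel fusion instead.

```python
# Toy sketch of an object-aware SLAM step: segment -> track -> fuse,
# with one model per instance label plus the background (label 0).
import numpy as np


class ObjectModel:
    def __init__(self, instance_id):
        self.instance_id = instance_id
        self.points = np.empty((0, 3))   # fused 3D points, object frame
        self.translation = np.zeros(3)   # toy pose: translation only


def backproject(depth, mask, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Lift masked depth pixels to 3D camera-frame points (pinhole model)."""
    v, u = np.nonzero(mask & (depth > 0))
    z = depth[v, u]
    return np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=1)

def process_frame(depth, instance_map, models):
    """One step: instance labels are given, then track and fuse per object."""
    for inst_id in np.unique(instance_map):
        model = models.setdefault(inst_id, ObjectModel(inst_id))
        pts = backproject(depth, instance_map == inst_id)
        if len(pts) == 0:
            continue
        # Toy "tracking": shift the pose so the model centroid matches the
        # centroid of this object's pixels; real systems run masked ICP /
        # photometric alignment restricted to these pixels.
        if len(model.points):
            model.translation = pts.mean(0) - model.points.mean(0)
        # Fuse the new observations into this object's own model.
        model.points = np.vstack([model.points, pts - model.translation])
    return models


# Usage: a 4x4 frame with background (label 0) and one object (label 1).
depth = np.full((4, 4), 2.0)
labels = np.zeros((4, 4), dtype=int)
labels[1:3, 1:3] = 1
models = process_frame(depth, labels, {})
print({i: m.points.shape for i, m in models.items()})
```

Because tracking for each model sees only the pixels carrying its label, a moving object cannot corrupt the background reconstruction; this is the property that lets the system handle dynamic scenes.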
Proceedings ArticleDOI

Co-Fusion: Real-time Segmentation, Tracking and Fusion of Multiple Objects

TL;DR: Co-Fusion is a dense SLAM system that takes a live stream of RGB-D images as input and segments the scene into different objects while simultaneously tracking and reconstructing their 3D shapes in real time. It uses a multiple-model-fitting approach in which each object can move independently of the background and still be effectively tracked, its shape fused over time using only the information from pixels associated with that object label. This can enable a robot to maintain a scene description at the object level, which has the potential to allow interactions with its working environment, even in dynamic scenes.
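
To make the multiple-model-fitting idea concrete, the sketch below fits one independent rigid motion per object label from labelled point correspondences, using the standard Kabsch/Procrustes alignment. It illustrates the principle of tracking each object from only its own pixels; it is not Co-Fusion's actual solver, which aligns dense RGB-D data rather than given correspondences.

```python
# One rigid model per label: each labelled object gets its own motion
# estimate, computed only from the points carrying that label.
import numpy as np


def rigid_fit(src, dst):
    """Best-fit rotation R and translation t with dst ~= src @ R.T + t."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(0) - src.mean(0) @ R.T
    return R, t


def fit_per_label(src, dst, labels):
    """Multiple model fitting: an independent rigid motion per label."""
    return {lab: rigid_fit(src[labels == lab], dst[labels == lab])
            for lab in np.unique(labels)}


# Usage: background (label 0) stays still, object (label 1) translates.
rng = np.random.default_rng(0)
src = rng.normal(size=(200, 3))
labels = (np.arange(200) < 50).astype(int)
dst = src.copy()
dst[labels == 1] += np.array([0.5, 0.0, 0.0])
motions = fit_per_label(src, dst, labels)
print({lab: np.round(t, 3) for lab, (R, t) in motions.items()})
```

The printed translations recover zero motion for the background and the 0.5 m shift for the object, showing how per-label fitting keeps independently moving objects from biasing each other's estimates.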
Posted Content

MaskFusion: Real-Time Recognition, Tracking and Reconstruction of Multiple Moving Objects

TL;DR: Preprint version of MaskFusion: a real-time object-aware, semantic and dynamic RGB-D SLAM system that goes beyond traditional systems, which output a purely geometric map of a static scene.
Proceedings ArticleDOI

FroDO: From Detections to 3D Objects

TL;DR: FroDO is a method for accurate 3D reconstruction of object instances from RGB video. It infers object location, pose and shape in a coarse-to-fine manner, embedding object shapes in a novel learnt shape space that allows seamless switching between sparse point-cloud and dense DeepSDF decoding.
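
As a toy illustration of the shape-code refinement this summary alludes to, the sketch below optimizes a latent code so that a decoder's signed distance is near zero at observed surface points. The "decoder" here is a deliberately trivial stand-in (a sphere whose radius is the code); FroDO uses a learned DeepSDF network and jointly optimizes pose and location as well.

```python
# Refine a latent shape code z so that decode_sdf(z, p) ~= 0 at observed
# surface points p, by gradient descent on the mean squared SDF residual.
import numpy as np


def decode_sdf(z, points):
    """Stand-in decoder: signed distance to a sphere of radius z at origin."""
    return np.linalg.norm(points, axis=1) - z


def refine_code(points, z_init=0.5, lr=0.1, steps=100):
    """Minimize mean(residual**2) over the scalar shape code z."""
    z = z_init
    for _ in range(steps):
        res = decode_sdf(z, points)   # want res ~= 0 on the surface
        grad = -2.0 * res.mean()      # d/dz of mean(res**2), since dres/dz = -1
        z -= lr * grad
    return z


# Usage: noisy points sampled from a sphere of radius 0.8.
rng = np.random.default_rng(1)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
points = 0.8 * dirs + 0.01 * rng.normal(size=(500, 3))
print(round(refine_code(points), 3))  # recovers ~0.8
```

Replacing the analytic sphere with a learned network changes only how the residual and its gradient are computed; the optimize-the-code-against-observations structure stays the same.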