Marius Zöllner

Researcher at Center for Information Technology

Publications - 10
Citations - 724

Marius Zöllner is an academic researcher from the Center for Information Technology. The author has contributed to research on the topics of Network architecture and External Data Representation, has an h-index of 4, and has co-authored 10 publications receiving 545 citations. Previous affiliations of Marius Zöllner include the Karlsruhe Institute of Technology and Forschungszentrum Informatik.

Papers
Proceedings Article

MultiNet: Real-time Joint Semantic Reasoning for Autonomous Driving

TL;DR: This paper presents an approach to joint classification, detection, and semantic segmentation using a unified architecture in which the encoder is shared among the three tasks; the approach performs extremely well on the challenging KITTI dataset.
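A minimal sketch of the shared-encoder idea in PyTorch: one encoder runs once per image, and three task-specific heads consume the same feature map. The layer sizes, head designs, and class counts below are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch of a shared-encoder multi-task network in the spirit of MultiNet.
# All dimensions and head designs are illustrative assumptions.
import torch
import torch.nn as nn

class SharedEncoderMultiTask(nn.Module):
    def __init__(self, num_classes=10, num_seg_classes=2):
        super().__init__()
        # Shared convolutional encoder (the paper uses a much deeper backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Task-specific heads all consume the same feature map.
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(64, num_classes))
        self.det_head = nn.Conv2d(64, 6, 1)   # per-cell box (4) + objectness + class
        self.seg_head = nn.Conv2d(64, num_seg_classes, 1)

    def forward(self, x):
        feats = self.encoder(x)               # computed once, shared by all tasks
        return self.cls_head(feats), self.det_head(feats), self.seg_head(feats)

cls_logits, det_map, seg_map = SharedEncoderMultiTask()(torch.randn(1, 3, 128, 384))
print(cls_logits.shape, det_map.shape, seg_map.shape)
```

Sharing the encoder is what makes real-time joint reasoning feasible: the most expensive computation is amortized across all three tasks.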
Book Chapter

Towards Grasping with Spiking Neural Networks for Anthropomorphic Robot Hands

TL;DR: A hierarchical spiking neural network with a biologically inspired architecture for representing different grasp motions is presented, and the ability to learn finger coordination and synergies between joints that can be used for grasping is demonstrated.
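Spiking networks compute with discrete spike events rather than continuous activations. As a hedged illustration of the dynamics such networks build on, the following toy leaky integrate-and-fire (LIF) layer in NumPy integrates input current into a membrane potential and emits binary spikes; all constants and the neuron count are arbitrary assumptions, not parameters from the paper.

```python
# Toy leaky integrate-and-fire (LIF) layer; constants are illustrative assumptions.
import numpy as np

def lif_layer(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a layer of LIF neurons over time.

    input_current: array of shape (timesteps, neurons).
    Returns a binary spike train of the same shape.
    """
    t_steps, n = input_current.shape
    v = np.zeros(n)
    spikes = np.zeros((t_steps, n))
    for t in range(t_steps):
        # Leaky integration: the potential decays toward the input current.
        v += dt / tau * (-v + input_current[t])
        fired = v >= v_thresh
        spikes[t, fired] = 1.0
        v[fired] = v_reset  # reset the membrane potential after a spike
    return spikes

rng = np.random.default_rng(0)
out = lif_layer(rng.uniform(0.5, 2.0, size=(200, 5)))
print(out.sum(axis=0))  # spike count per neuron over the simulated window
```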
Proceedings Article

DSRC and radar object matching for cooperative driver assistance systems

TL;DR: This paper proposes a system architecture for fusing DSRC and radar data that applies a reliable statistical track-to-track association algorithm in a novel way to solve the resulting data matching problem.
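At the heart of statistical track-to-track association is a gating test on the difference between two track state estimates. Below is a hedged sketch of such a test, assuming independent estimation errors and a made-up state layout of [x, y, vx, vy]; it illustrates the general technique, not the paper's exact algorithm.

```python
# Chi-square gating on the difference between two track estimates, the kind
# of test used to decide whether a DSRC track and a radar track describe the
# same vehicle. Threshold and state layout are illustrative assumptions.
import numpy as np
from scipy.stats import chi2

def tracks_match(x_dsrc, P_dsrc, x_radar, P_radar, alpha=0.99):
    """x_*: state estimates (here [x, y, vx, vy]); P_*: their covariances.
    Assumes independent estimation errors (cross-covariance ignored)."""
    d = x_dsrc - x_radar
    S = P_dsrc + P_radar                   # covariance of the state difference
    m2 = d @ np.linalg.solve(S, d)         # squared Mahalanobis distance
    return m2 <= chi2.ppf(alpha, df=len(d))

x1 = np.array([10.0, 5.0, 13.0, 0.0]); P1 = np.diag([1.0, 1.0, 0.5, 0.5])
x2 = np.array([10.8, 4.6, 12.5, 0.1]); P2 = np.diag([2.0, 2.0, 1.0, 1.0])
print(tracks_match(x1, P1, x2, P2))  # True: the tracks fall inside the gate
```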
Posted Content

Boosting LiDAR-based Semantic Labeling by Cross-Modal Training Data Generation

TL;DR: A novel deep neural network architecture called LiLaNet is presented for point-wise, multi-class semantic labeling of semi-dense LiDAR data, and an automated process for large-scale cross-modal training data generation called Autolabeling is proposed to boost semantic labeling performance while keeping the manual annotation effort low.
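The cross-modal idea rests on a simple geometric step: project LiDAR points into a camera image that an image segmentation network has already labeled, then copy the pixel labels onto the points. The sketch below shows that projection step under standard pinhole-camera assumptions; the calibration matrices, shapes, and function name are placeholders, not values from the paper.

```python
# Cross-modal label transfer sketch: LiDAR points inherit the class labels of
# the camera pixels they project onto. Calibration values are placeholders.
import numpy as np

def transfer_labels(points_lidar, seg_image, T_cam_lidar, K):
    """points_lidar: (N, 3) xyz; seg_image: (H, W) class ids;
    T_cam_lidar: (4, 4) extrinsics; K: (3, 3) camera intrinsics."""
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ pts_h.T)[:3]          # transform into camera frame
    uvw = K @ pts_cam
    u = (uvw[0] / uvw[2]).astype(int)
    v = (uvw[1] / uvw[2]).astype(int)
    h, w = seg_image.shape
    labels = np.full(n, -1)                        # -1: point not visible in image
    valid = (uvw[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    labels[valid] = seg_image[v[valid], u[valid]]  # copy pixel label to point
    return labels

pts = np.array([[0.0, 0.0, 5.0], [2.0, 1.0, 4.0]])
seg = np.zeros((480, 640), dtype=int); seg[:, 320:] = 1
K = np.array([[500.0, 0, 320.0], [0, 500.0, 240.0], [0, 0, 1.0]])
print(transfer_labels(pts, seg, np.eye(4), K))
```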
Book Chapter

Improved Semantic Stixels via Multimodal Sensor Fusion

TL;DR: The results indicate that the proposed mid-level fusion of LiDAR and camera data significantly improves both the geometric and semantic accuracy of the Stixel model while reducing the computational overhead and the amount of generated data compared to using a single modality on its own.
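One common way to realize such mid-level fusion is to combine per-stixel class scores from both sensors before the final decision, for example by summing log-probabilities under an independence assumption. The sketch below illustrates that general pattern; the score shapes, weighting, and function name are illustrative assumptions, not the paper's exact formulation.

```python
# Per-stixel semantic fusion sketch: combine camera and LiDAR class scores
# by summing log-probabilities. Shapes and weighting are assumptions.
import numpy as np

def fuse_stixel_semantics(cam_logprobs, lidar_logprobs, w_lidar=1.0):
    """cam_logprobs, lidar_logprobs: (num_stixels, num_classes).
    Returns the fused class id for each stixel."""
    fused = cam_logprobs + w_lidar * lidar_logprobs  # independent-evidence fusion
    return fused.argmax(axis=1)

rng = np.random.default_rng(1)
cam = np.log(rng.dirichlet(np.ones(4), size=8))    # camera class probabilities
lidar = np.log(rng.dirichlet(np.ones(4), size=8))  # LiDAR class probabilities
print(fuse_stixel_semantics(cam, lidar))           # fused class id per stixel
```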