
Zhenpei Yang

Researcher at University of Texas at Austin

Publications - 18
Citations - 306

Zhenpei Yang is an academic researcher from the University of Texas at Austin. The author has contributed to research in the topics of Computer science & Representation (mathematics), has an h-index of 6, and has co-authored 14 publications receiving 124 citations.

Papers
Proceedings ArticleDOI

SurfelGAN: Synthesizing Realistic Sensor Data for Autonomous Driving

TL;DR: This paper presents a simple yet effective approach to generating realistic scenario sensor data based only on a limited amount of lidar and camera data collected by an autonomous vehicle. Texture-mapped surfels are used to efficiently reconstruct the scene from an initial vehicle pass or set of passes, preserving rich information about object 3D geometry and appearance as well as the scene conditions.
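Roughly, the method renders the reconstructed surfel scene from a new viewpoint and then uses an adversarially trained network to close the realism gap. The sketch below is a minimal, hypothetical PyTorch illustration of that render-then-translate idea; the module names, layer sizes, and losses are assumptions for illustration, not the paper's actual SurfelGAN architecture.

```python
# Minimal sketch (assumption: pix2pix-style translation from surfel renderings
# to realistic camera images; not the actual SurfelGAN design).
import torch
import torch.nn as nn

class SurfelToImage(nn.Module):
    """Translates a texture-mapped surfel rendering into a camera-like image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, surfel_render):
        return self.net(surfel_render)

class Discriminator(nn.Module):
    """Scores whether an image looks like real sensor data."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, img):
        return self.net(img)

surfel_render = torch.rand(1, 3, 128, 128)   # rendering of the reconstructed scene
fake_image = SurfelToImage()(surfel_render)  # synthesized camera image
score = Discriminator()(fake_image)          # adversarial realism score
```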
Journal ArticleDOI

Deep Generative Modeling for Scene Synthesis via Hybrid Representations

TL;DR: A deep generative scene modeling technique that uses a feed-forward neural network to map a prior distribution to the distribution of primary objects in indoor scenes, together with a 3D object arrangement representation that models the locations and orientations of objects based on their size and shape attributes.
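As a rough illustration of the feed-forward mapping described above, the hypothetical PyTorch sketch below draws a latent code from a Gaussian prior and maps it to a fixed-size object-arrangement tensor (per-object existence, location, orientation, and size). The dimensions and attribute layout are assumptions for illustration, not the paper's hybrid representation.

```python
# Minimal sketch (assumptions: latent size, object count, and per-object
# attribute layout are illustrative placeholders).
import torch
import torch.nn as nn

MAX_OBJECTS = 20
# Per object slot: existence (1) + location xyz (3) + orientation angle (1) + size whd (3)
ATTRS_PER_OBJECT = 1 + 3 + 1 + 3

class SceneGenerator(nn.Module):
    """Feed-forward network mapping a Gaussian prior sample to an
    object-arrangement matrix (one row of attributes per object slot)."""
    def __init__(self, z_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, MAX_OBJECTS * ATTRS_PER_OBJECT),
        )

    def forward(self, z):
        return self.net(z).view(-1, MAX_OBJECTS, ATTRS_PER_OBJECT)

gen = SceneGenerator()
z = torch.randn(8, 128)  # samples from the prior distribution
scenes = gen(z)          # (8, 20, 8) object-arrangement tensors
```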

Implicit Autoencoder for Point Cloud Self-supervised Representation Learning

TL;DR: Implicit Autoencoder (IAE) is introduced, a simple yet effective method that addresses the challenge of autoencoding point clouds by replacing the point cloud decoder with an implicit decoder that outputs a continuous representation shared among different point cloud samplings of the same model.
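The core idea, sketched below in a hypothetical PyTorch example, is to keep a point cloud encoder but replace the point-set decoder with an implicit decoder that maps the latent code plus a 3D query location to a continuous field value. The encoder/decoder sizes and the occupancy-style loss are assumptions for illustration, not the paper's exact design.

```python
# Minimal sketch (assumptions: PointNet-style encoder, occupancy-style implicit
# decoder; layer sizes and loss are illustrative).
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Encodes a point cloud (B, N, 3) into a global latent code (B, D)."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, latent_dim))

    def forward(self, pts):
        feat = self.mlp(pts)           # per-point features (B, N, D)
        return feat.max(dim=1).values  # permutation-invariant pooling -> (B, D)

class ImplicitDecoder(nn.Module):
    """Maps a latent code plus a 3D query point to a continuous value
    (e.g. occupancy), instead of decoding a fixed point set."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(latent_dim + 3, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, z, queries):
        z = z.unsqueeze(1).expand(-1, queries.shape[1], -1)  # (B, Q, D)
        return self.mlp(torch.cat([z, queries], dim=-1))     # (B, Q, 1)

# Self-supervised training step: the reconstructed continuous field is shared
# across different samplings of the same underlying shape.
encoder, decoder = PointEncoder(), ImplicitDecoder()
pts = torch.rand(4, 1024, 3)        # one sampling of each shape
queries = torch.rand(4, 512, 3)     # query locations in space
target_occ = torch.rand(4, 512, 1)  # placeholder ground-truth field values
pred = decoder(encoder(pts), queries)
loss = nn.functional.binary_cross_entropy_with_logits(pred, target_occ)
```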
Proceedings ArticleDOI

Extreme Relative Pose Network Under Hybrid Representations

TL;DR: A novel RGB-D based relative pose estimation approach that is suitable for scans with small or no overlap, can output multiple relative poses, and considerably boosts the performance of multi-scan reconstruction in few-view settings.
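One way to read "can output multiple relative poses" is a multi-hypothesis pose head; the hypothetical PyTorch sketch below predicts K quaternion-plus-translation candidates from two per-scan feature vectors. The parameterization and dimensions are assumptions for illustration, not the paper's hybrid-representation design.

```python
# Minimal sketch (assumptions: precomputed global features per scan and K pose
# hypotheses parameterized as unit quaternion + translation).
import torch
import torch.nn as nn

class MultiHypothesisPoseNet(nn.Module):
    """Predicts K candidate relative poses between two scan feature vectors."""
    def __init__(self, feat_dim=512, num_hypotheses=4):
        super().__init__()
        self.k = num_hypotheses
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, self.k * 7),  # per hypothesis: quaternion (4) + translation (3)
        )

    def forward(self, feat_a, feat_b):
        out = self.head(torch.cat([feat_a, feat_b], dim=-1)).view(-1, self.k, 7)
        quat = nn.functional.normalize(out[..., :4], dim=-1)  # unit quaternions
        trans = out[..., 4:]
        return quat, trans

feat_a, feat_b = torch.rand(2, 512), torch.rand(2, 512)  # features of two scans
quats, translations = MultiHypothesisPoseNet()(feat_a, feat_b)
```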
Proceedings ArticleDOI

Extreme Relative Pose Estimation for RGB-D Scans via Scene Completion

TL;DR: In this article, a novel approach is proposed that extends relative pose estimation to extreme relative poses, with little or even no overlap between the input scans, whereas existing approaches require a large amount of overlap between the two RGB-D scans of the same underlying environment.