Siming Yan
Researcher at University of Texas at Austin
Publications - 13
Citations - 328
Siming Yan is an academic researcher at the University of Texas at Austin. His research spans computer science and artificial neural networks. He has an h-index of 4 and has co-authored 10 publications receiving 85 citations. Previous affiliations include Peking University and Cedars-Sinai Medical Center.
Papers
Journal ArticleDOI
Unsupervised neural network models of the ventral visual stream
Chengxu Zhuang,Siming Yan,Aran Nayebi,Martin Schrimpf,Michael C. Frank,James J. DiCarlo,Daniel L. K. Yamins +6 more
TL;DR: This article shows that neural network models trained with deep unsupervised contrastive embedding methods achieve neural prediction accuracy in multiple ventral visual cortical areas that equals or exceeds that of models derived using today's best supervised methods, and that the mapping of network hidden layers onto the ventral stream is neuroanatomically consistent.
Implicit Autoencoder for Point Cloud Self-supervised Representation Learning
TL;DR: Introduces the Implicit Autoencoder (IAE), a simple yet effective method for autoencoding point clouds: the point cloud decoder is replaced with an implicit decoder that outputs a continuous representation shared across different point cloud samplings of the same model.
Proceedings ArticleDOI
Extreme Relative Pose Network Under Hybrid Representations
TL;DR: A novel RGB-D based relative pose estimation approach suited to scans with small or no overlap. It can output multiple candidate relative poses and considerably boosts multi-scan reconstruction performance in few-view settings.
Posted Content
HPNet: Deep Primitive Segmentation Using Hybrid Representations
TL;DR: HPNet leverages hybrid representations that combine a learned semantic descriptor, two spectral descriptors derived from predicted geometric parameters, and an adjacency matrix that encodes sharp edges.