
Yunzhu Li

Researcher at Massachusetts Institute of Technology

Publications -  65
Citations -  2366

Yunzhu Li is an academic researcher from the Massachusetts Institute of Technology. The author has contributed to research in the topics of computer science and medicine, has an h-index of 18, and has co-authored 32 publications receiving 1,220 citations. Previous affiliations of Yunzhu Li include Peking University.

Papers
Journal ArticleDOI

Learning the signatures of the human grasp using a scalable tactile glove.

TL;DR: Tactile patterns obtained from a scalable sensor-embedded glove and deep convolutional neural networks help to explain how the human hand can identify and grasp individual objects and estimate their weights.
Proceedings Article

InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations

TL;DR: A new algorithm, built on top of Generative Adversarial Imitation Learning, is proposed that can infer the latent structure of expert demonstrations in an unsupervised way; it can not only imitate complex behaviors but also learn interpretable and meaningful representations of complex behavioral data, including visual demonstrations.
Posted Content

CLEVRER: CoLlision Events for Video REpresentation and Reasoning

TL;DR: This work introduces CLEVRER (CoLlision Events for Video REpresentation and Reasoning), a diagnostic video dataset for the systematic evaluation of computational models on a wide range of reasoning tasks, and benchmarks various state-of-the-art visual-reasoning models on it.
Book ChapterDOI

Face Detection with End-to-End Integration of a ConvNet and a 3D Model

TL;DR: This paper presents a method for face detection in the wild that integrates a ConvNet and a 3D mean face model in an end-to-end multi-task discriminative learning framework, and addresses two issues that arise in adapting state-of-the-art generic object-detection ConvNets.
Journal ArticleDOI

Learning human–environment interactions using conformal tactile textiles

TL;DR: A textile-based tactile learning platform that can be used to record, monitor and learn human–environment interactions and it is shown that the artificial-intelligence-powered sensing textiles can classify humans’ sitting poses, motions and other interactions with the environment.