Institution
Beijing Film Academy
Education • Beijing, China
About: Beijing Film Academy is an education organization based in Beijing, China. It is known for its research contributions in the topics of augmented reality and virtual reality. The organization has 70 authors who have published 105 publications receiving 336 citations. The organization is also known as BFA and Běijīng Diànyǐng Xuéyuàn.
Topics: Augmented reality, Virtual reality, Computer science, Narrative, Rendering (computer graphics)
Papers
TL;DR: A new video-based performance cloning technique that, after training a deep generative network on a reference video capturing the appearance and dynamics of a target actor, can generate videos in which that actor reenacts other performances.
Abstract: We present a new video-based performance cloning technique. After training a deep generative network using a reference video capturing the appearance and dynamics of a target actor, we are able to generate videos where this actor reenacts other performances. All of the training data and the driving performances are provided as ordinary video segments, without motion capture or depth information. Our generative model is realized as a deep neural network with two branches, both of which train the same space-time conditional generator, using shared weights. One branch, responsible for learning to generate the appearance of the target actor in various poses, uses *paired* training data, self-generated from the reference video. The second branch uses unpaired data to improve generation of temporally coherent video renditions of unseen pose sequences. We demonstrate a variety of promising results, where our method is able to generate temporally coherent videos, for challenging scenarios where the reference and driving videos consist of very different dance performances. Supplementary video: this https URL.
74 citations
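The two-branch, shared-weight training scheme described in the abstract above can be caricatured with a toy example: one branch fits paired (input, target) data self-generated from a reference, the other applies a temporal-coherence penalty to unpaired sequences, and both gradients update the same weights. Everything below (the linear stand-in "generator", the target mapping `2.0 * x`, the loss weighting) is an illustrative assumption, not the paper's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the shared space-time conditional generator:
# a single linear map y = w * x (the real model is a deep network).
w = rng.normal(size=3)

def generate(w, x):
    return w * x

# Branch 1 (paired): self-generated (pose, frame) pairs from the
# reference give a direct reconstruction target (assumed here: y = 2x).
x_paired = rng.normal(size=(16, 3))
y_paired = 2.0 * x_paired

# Branch 2 (unpaired): driving poses with no matching target frames;
# we only ask consecutive generated outputs to vary smoothly.
x_unpaired = np.cumsum(rng.normal(scale=0.1, size=(16, 3)), axis=0)

lr = 0.05
for _ in range(200):
    # Paired reconstruction loss ||g(x) - y||^2, gradient w.r.t. w.
    g_p = generate(w, x_paired)
    grad_paired = 2 * np.mean((g_p - y_paired) * x_paired, axis=0)

    # Unpaired temporal loss: penalize frame-to-frame jumps in g(x).
    g_u = generate(w, x_unpaired)
    diff = g_u[1:] - g_u[:-1]
    dx = x_unpaired[1:] - x_unpaired[:-1]
    grad_unpaired = 2 * np.mean(diff * dx, axis=0)

    # Both branches update the SAME weights.
    w -= lr * (grad_paired + 0.1 * grad_unpaired)
```

Because the temporal penalty is small relative to the reconstruction term, the shared weights converge close to the paired target mapping while still being shaped by the unpaired branch, which is the division of labor the abstract describes.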
TL;DR: Soft, deformable, and ultrahigh-performance textile strain sensors are fabricated by directly stencil-printing silver ink on pre-stretched textiles for HMIs.
Abstract: Gesture control is an emerging technological goal in the field of human–machine interfaces (HMIs). Optical fibers or metal strain sensors as sensing elements are generally complex and not sensitive enough to accurately capture gestures, and thus there is a need for additional complicated signal optimization. Electronic sensing textiles hold great promise for the next generation of wearable electronics. Here, soft, deformable and ultrahigh-performance textile strain sensors are fabricated by directly stencil printing silver ink on pre-stretched textiles towards HMIs. These textile strain sensors exhibit ultrahigh sensitivity (a gauge factor of ∼2000), stretchability (up to 60% strain), and durability (>10 000 stretching cycles). Through a simple auxiliary signal processing circuit with Bluetooth communication technology, an intelligent glove assembled with these textile strain sensors is prepared, which is capable of detecting the full range of fingers’ bending and can translate the fingers’ bending into wireless control commands. Immediate applications, for example, as a smart car director, for wireless typing, and as a remote PowerPoint controller, bring out the great practical value of these textile strain sensors in the field of wearable electronics. This work provides a new perspective for achieving wearable sensing electronic textiles with ultrahigh performance towards HMIs, and will further expand their impact in the field of the Internet of Things.
48 citations
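The gauge factor of ∼2000 reported above is defined as the relative resistance change per unit strain, GF = (ΔR/R₀)/ε. A minimal sketch of that arithmetic, with illustrative numbers that are not taken from the paper:

```python
def gauge_factor(delta_r, r0, strain):
    """Gauge factor GF = (ΔR / R0) / ε for a resistive strain sensor."""
    return (delta_r / r0) / strain

def strain_from_resistance(delta_r, r0, gf):
    """Invert the definition: recover strain from a measured resistance change."""
    return (delta_r / r0) / gf

# Hypothetical readings: a 100 Ω baseline rising by 2000 Ω at 1% strain
# corresponds to a gauge factor of 2000 (the order reported in the paper).
gf = gauge_factor(delta_r=2000.0, r0=100.0, strain=0.01)
```

By comparison, conventional metal-foil strain gauges have gauge factors around 2, which is why the abstract stresses that such high sensitivity removes the need for complicated signal optimization.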
TL;DR: A hetero-contact microstructure (HeCM) is proposed for fabricating a tactile sensor, using a silver nanowires@polyurethane scaffold combined with layered carbon fabric.
44 citations
TL;DR: MotioNet is a deep neural network with embedded kinematic priors that decomposes sequences of 2D joint positions into two separate attributes: a single, symmetric skeleton encoded by bone lengths, and a sequence of 3D joint rotations associated with global root positions and foot contact labels.
Abstract: We introduce MotioNet, a deep neural network that directly reconstructs the motion of a 3D human skeleton from a monocular video. While previous methods rely on either rigging or inverse kinematics (IK) to associate a consistent skeleton with temporally coherent joint rotations, our method is the first data-driven approach that directly outputs a kinematic skeleton, which is a complete, commonly used motion representation. At the crux of our approach lies a deep neural network with embedded kinematic priors, which decomposes sequences of 2D joint positions into two separate attributes: a single, symmetric skeleton encoded by bone lengths, and a sequence of 3D joint rotations associated with global root positions and foot contact labels. These attributes are fed into an integrated forward kinematics (FK) layer that outputs 3D positions, which are compared to a ground truth. In addition, an adversarial loss is applied to the velocities of the recovered rotations to ensure that they lie on the manifold of natural joint rotations. The key advantage of our approach is that it learns to infer natural joint rotations directly from the training data rather than assuming an underlying model, or inferring them from joint positions using a data-agnostic IK solver. We show that enforcing a single consistent skeleton along with temporally coherent joint rotations constrains the solution space, leading to a more robust handling of self-occlusions and depth ambiguities.
37 citations
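MotioNet's forward kinematics (FK) layer, described in the abstract above, converts the two predicted attributes (bone lengths and per-joint rotations) into 3D joint positions by accumulating rotations down the skeleton hierarchy. A minimal NumPy sketch of that computation; the joint layout, axis-angle parameterization, and function names are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def rotation_matrix(axis_angle):
    """Rodrigues' formula: axis-angle vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(axis_angle)
    if theta < 1e-8:
        return np.eye(3)
    k = axis_angle / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def forward_kinematics(bone_lengths, rotations, parents, offsets_dir, root_pos):
    """
    bone_lengths: (J,) length of the bone joining joint j to its parent
    rotations:    (J, 3) per-joint local rotations as axis-angle vectors
    parents:      (J,) parent index per joint, -1 for the root
    offsets_dir:  (J, 3) unit rest-pose direction of each bone
    root_pos:     (3,) global root position
    Returns (J, 3) global joint positions.
    """
    J = len(parents)
    global_pos = np.zeros((J, 3))
    global_rot = [np.eye(3)] * J
    for j in range(J):           # parents assumed to precede children
        R_local = rotation_matrix(rotations[j])
        p = parents[j]
        if p < 0:
            global_rot[j] = R_local
            global_pos[j] = root_pos
        else:
            # Chain the parent's accumulated rotation, then offset the
            # joint along its (rotated) rest-pose bone direction.
            global_rot[j] = global_rot[p] @ R_local
            offset = bone_lengths[j] * offsets_dir[j]
            global_pos[j] = global_pos[p] + global_rot[p] @ offset
    return global_pos

# Example: a 3-joint chain along x, with the middle joint bent 90° about z.
parents = [-1, 0, 1]
bone_lengths = np.array([0.0, 1.0, 1.0])
offsets_dir = np.array([[0, 0, 0], [1, 0, 0], [1, 0, 0]], dtype=float)
rotations = np.array([[0, 0, 0], [0, 0, np.pi / 2], [0, 0, 0]])
positions = forward_kinematics(bone_lengths, rotations, parents,
                               offsets_dir, np.zeros(3))
# positions[2] ends up at [1, 1, 0]
```

Because the layer is differentiable, the positional loss against ground-truth 3D joints can backpropagate through it into the predicted bone lengths and rotations, which is what lets the network learn rotations without direct rotation supervision.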
Authors
Showing all 70 results
Name | H-index | Papers | Citations |
---|---|---|---|
Daniel Cohen-Or | 95 | 448 | 31871 |
Baoquan Chen | 50 | 258 | 9315 |
Yongtian Wang | 30 | 262 | 4103 |
Yongtian Wang | 27 | 357 | 3010 |
Tingting Jiang | 20 | 90 | 1628 |
Yue Liu | 16 | 200 | 1136 |
Dongdong Weng | 11 | 125 | 609 |
Kfir Aberman | 11 | 23 | 336 |
Shuwu Zhang | 9 | 90 | 328 |
Wujun Che | 8 | 23 | 170 |
Weitao Song | 7 | 22 | 132 |
Xinxin Zhang | 6 | 9 | 146 |
Mingyi Shi | 6 | 10 | 195 |
Jie Liu | 6 | 23 | 97 |
Yue Liu | 5 | 8 | 50 |