Institution

Beijing Film Academy

Education · Beijing, China
About: Beijing Film Academy is an education organization based in Beijing, China. It is known for its research contributions in the topics of augmented reality and virtual reality. The organization has 70 authors who have published 105 publications receiving 336 citations. The organization is also known as: BFA & Běijīng Diànyǐng Xuéyuàn.

Papers published on a yearly basis

Papers
Journal Article (DOI)
TL;DR: A new video-based performance cloning technique that, after training a deep generative network on a reference video capturing the appearance and dynamics of a target actor, can generate videos in which this actor reenacts other performances.
Abstract: We present a new video-based performance cloning technique. After training a deep generative network using a reference video capturing the appearance and dynamics of a target actor, we are able to generate videos where this actor reenacts other performances. All of the training data and the driving performances are provided as ordinary video segments, without motion capture or depth information. Our generative model is realized as a deep neural network with two branches, both of which train the same space-time conditional generator, using shared weights. One branch, responsible for learning to generate the appearance of the target actor in various poses, uses paired training data, self-generated from the reference video. The second branch uses unpaired data to improve generation of temporally coherent video renditions of unseen pose sequences. We demonstrate a variety of promising results, where our method is able to generate temporally coherent videos, for challenging scenarios where the reference and driving videos consist of very different dance performances. Supplementary video: this https URL.
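
The two-branch scheme described above can be pictured with a short sketch: one generator is called by both a paired branch (reconstruction against frames self-generated from the reference video) and an unpaired branch (an adversarial term on unseen driving poses), so both losses update the same weights. This is a minimal, assumed PyTorch-style illustration, not the authors' code; the layer sizes, the discriminator, and the loss weighting are placeholders.

# Minimal sketch (assumed, not the paper's implementation) of a two-branch
# training step that shares one space-time conditional generator.
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Toy space-time conditional generator: pose maps -> video frames."""
    def __init__(self, pose_channels=3, out_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(pose_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, pose_seq):                 # (B, C, T, H, W)
        return self.net(pose_seq)

def training_step(gen, disc, paired_poses, paired_frames, unpaired_poses):
    # Branch 1: paired data self-generated from the reference video,
    # trained with a simple reconstruction loss.
    loss_paired = nn.functional.l1_loss(gen(paired_poses), paired_frames)
    # Branch 2: unpaired driving poses with no ground-truth frames; only an
    # adversarial term from a (hypothetical) video discriminator `disc`,
    # which pushes the shared generator toward temporally coherent output.
    loss_unpaired = -disc(gen(unpaired_poses)).mean()
    # The 0.1 weighting is an arbitrary illustrative choice.
    return loss_paired + 0.1 * loss_unpaired

Because both branches call the same CondGenerator, back-propagating their summed loss updates a single set of generator weights, which is the weight sharing the abstract emphasizes.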

74 citations

Journal Article (DOI)
TL;DR: In this paper, soft, deformable and ultrahigh-performance textile strain sensors are fabricated by directly stencil printing silver ink on pre-stretched textiles towards human–machine interfaces (HMIs).
Abstract: Gesture control is an emerging technological goal in the field of human–machine interfaces (HMIs). Optical fibers or metal strain sensors as sensing elements are generally complex and not sensitive enough to accurately capture gestures, and thus there is a need for additional complicated signal optimization. Electronic sensing textiles hold great promise for the next generation of wearable electronics. Here, soft, deformable and ultrahigh-performance textile strain sensors are fabricated by directly stencil printing silver ink on pre-stretched textiles towards HMIs. These textile strain sensors exhibit ultrahigh sensitivity (a gauge factor of ∼2000), stretchability (up to 60% strain), and durability (>10 000 stretching cycles). Through a simple auxiliary signal processing circuit with Bluetooth communication technology, an intelligent glove assembled with these textile strain sensors is prepared, which is capable of detecting the full range of fingers’ bending and can translate the fingers’ bending into wireless control commands. Immediate applications, for example, as a smart car director, for wireless typing, and as a remote PowerPoint controller, bring out the great practical value of these textile strain sensors in the field of wearable electronics. This work provides a new perspective for achieving wearable sensing electronic textiles with ultrahigh performance towards HMIs, and will further expand their impact in the field of the Internet of Things.
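
For scale, the sensitivity figure quoted above can be unpacked with the standard gauge-factor relation GF = (ΔR/R0)/ε, i.e. the relative resistance change divided by the applied strain. The snippet below is only a back-of-the-envelope illustration using the reported numbers (GF ≈ 2000, strain up to 60%); it assumes a constant gauge factor, which real sensors do not strictly obey, and it is not data from the paper.

# Back-of-the-envelope use of the gauge-factor relation GF = (dR/R0) / strain,
# with the figures quoted in the abstract (GF ~ 2000, strain up to 60%).
def relative_resistance_change(gauge_factor: float, strain: float) -> float:
    """Return dR/R0 for a given gauge factor and strain (strain as a fraction)."""
    return gauge_factor * strain

if __name__ == "__main__":
    GF = 2000                            # reported sensitivity (approximate)
    for strain in (0.01, 0.10, 0.60):    # 1%, 10%, and the 60% maximum stretch
        # Constant GF is a simplifying assumption; printed textile sensors are
        # typically nonlinear across such a wide strain range.
        print(f"strain {strain:.0%}: dR/R0 ~ {relative_resistance_change(GF, strain):.0f}")

Even at 1% strain this corresponds to a resistance change of roughly 20 times the baseline, which is why such a sensor can resolve small finger motions without elaborate signal conditioning.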

48 citations

Journal Article (DOI)
TL;DR: In this article, a hetero-contact microstructure (HeCM) is proposed for fabricating a tactile sensor using a silver nanowires@polyurethane scaffold combined with layered carbon fabric.

44 citations

Journal Article (DOI)
TL;DR: MotioNet, as discussed by the authors, is a deep neural network with embedded kinematic priors that decomposes sequences of 2D joint positions into two separate attributes: a single, symmetric skeleton encoded by bone lengths, and a sequence of 3D joint rotations associated with global root positions and foot contact labels.
Abstract: We introduce MotioNet, a deep neural network that directly reconstructs the motion of a 3D human skeleton from a monocular video. While previous methods rely on either rigging or inverse kinematics (IK) to associate a consistent skeleton with temporally coherent joint rotations, our method is the first data-driven approach that directly outputs a kinematic skeleton, which is a complete, commonly used motion representation. At the crux of our approach lies a deep neural network with embedded kinematic priors, which decomposes sequences of 2D joint positions into two separate attributes: a single, symmetric skeleton encoded by bone lengths, and a sequence of 3D joint rotations associated with global root positions and foot contact labels. These attributes are fed into an integrated forward kinematics (FK) layer that outputs 3D positions, which are compared to a ground truth. In addition, an adversarial loss is applied to the velocities of the recovered rotations to ensure that they lie on the manifold of natural joint rotations. The key advantage of our approach is that it learns to infer natural joint rotations directly from the training data rather than assuming an underlying model, or inferring them from joint positions using a data-agnostic IK solver. We show that enforcing a single consistent skeleton along with temporally coherent joint rotations constrains the solution space, leading to a more robust handling of self-occlusions and depth ambiguities.
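
The forward kinematics (FK) layer mentioned above is the piece that turns the network's two outputs (a fixed, symmetric set of bone lengths and per-frame joint rotations) into 3D joint positions that can be compared to ground truth. The sketch below is a generic FK routine written for clarity, assuming local rotation matrices and a parent index per joint; the joint ordering, rotation parameterization, and conventions are assumptions, not MotioNet's actual code.

# Generic forward-kinematics (FK) sketch: local rotations + bone lengths -> 3D
# joint positions, by accumulating transforms down the kinematic tree.
import numpy as np

def forward_kinematics(rotations, bone_lengths, bone_dirs, parents):
    """
    rotations:    (J, 3, 3) local rotation matrix of each joint w.r.t. its parent.
    bone_lengths: (J,)      length of the bone from each joint to its parent.
    bone_dirs:    (J, 3)    unit direction of that bone in the parent's rest frame.
    parents:      (J,)      parent index of each joint; -1 marks the root.
    Joints must be ordered so that each parent precedes its children.
    Returns (J, 3) global joint positions with the root at the origin.
    """
    num_joints = len(parents)
    global_rot = np.zeros((num_joints, 3, 3))
    positions = np.zeros((num_joints, 3))
    for j in range(num_joints):
        if parents[j] == -1:                      # root joint stays at the origin
            global_rot[j] = rotations[j]
        else:
            p = parents[j]
            global_rot[j] = global_rot[p] @ rotations[j]
            positions[j] = positions[p] + global_rot[p] @ (bone_lengths[j] * bone_dirs[j])
    return positions

# Tiny 3-joint chain (root -> elbow -> wrist along x) with identity rotations.
parents = np.array([-1, 0, 1])
rotations = np.stack([np.eye(3)] * 3)
print(forward_kinematics(rotations, np.array([0.0, 0.3, 0.25]),
                         np.array([[0, 0, 0], [1, 0, 0], [1, 0, 0]], float), parents))

Because the bone lengths are produced once per sequence while only the rotations vary per frame, every reconstructed frame shares the same skeleton, which is the consistency constraint the abstract argues makes the method robust to self-occlusions and depth ambiguities.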

37 citations

Posted Content
TL;DR: MotioNet, a deep neural network that directly reconstructs the motion of a 3D human skeleton from monocular video, is introduced; it is the first data-driven approach that directly outputs a kinematic skeleton, which is a complete, commonly used motion representation.
Abstract: We introduce MotioNet, a deep neural network that directly reconstructs the motion of a 3D human skeleton from monocular video. While previous methods rely on either rigging or inverse kinematics (IK) to associate a consistent skeleton with temporally coherent joint rotations, our method is the first data-driven approach that directly outputs a kinematic skeleton, which is a complete, commonly used motion representation. At the crux of our approach lies a deep neural network with embedded kinematic priors, which decomposes sequences of 2D joint positions into two separate attributes: a single, symmetric skeleton, encoded by bone lengths, and a sequence of 3D joint rotations associated with global root positions and foot contact labels. These attributes are fed into an integrated forward kinematics (FK) layer that outputs 3D positions, which are compared to a ground truth. In addition, an adversarial loss is applied to the velocities of the recovered rotations, to ensure that they lie on the manifold of natural joint rotations. The key advantage of our approach is that it learns to infer natural joint rotations directly from the training data, rather than assuming an underlying model, or inferring them from joint positions using a data-agnostic IK solver. We show that enforcing a single consistent skeleton along with temporally coherent joint rotations constrains the solution space, leading to a more robust handling of self-occlusions and depth ambiguities.

33 citations


Authors

Name | H-index | Papers | Citations
Daniel Cohen-Or | 95 | 448 | 31871
Baoquan Chen | 50 | 258 | 9315
Yongtian Wang | 30 | 262 | 4103
Yongtian Wang | 27 | 357 | 3010
Tingting Jiang | 20 | 90 | 1628
Yue Liu | 16 | 200 | 1136
Dongdong Weng | 11 | 125 | 609
Kfir Aberman | 11 | 23 | 336
Shuwu Zhang | 9 | 90 | 328
Wujun Che | 8 | 23 | 170
Weitao Song | 7 | 22 | 132
Xinxin Zhang | 6 | 9 | 146
Mingyi Shi | 6 | 10 | 195
Jie Liu | 6 | 23 | 97
Yue Liu | 5 | 8 | 50
Network Information
Related Institutions (5)
Hunan University of Technology
3.5K papers, 34.8K citations

67% related

College of Information Technology
4.5K papers, 41.9K citations

66% related

Shanghai University
56.8K papers, 753.5K citations

64% related

Hunan Normal University
12.6K papers, 180.8K citations

63% related

Jishou University
2.1K papers, 25.5K citations

62% related

Performance
Metrics
No. of papers from the Institution in previous years
Year | Papers
2022 | 1
2021 | 23
2020 | 29
2019 | 30
2018 | 16
2017 | 1