Dongdong Weng
Researcher at Beijing Institute of Technology
Publications - 146
Citations - 934
Dongdong Weng is an academic researcher at the Beijing Institute of Technology. The author has contributed to research in the topics of augmented reality and computer science, has an h-index of 11, and has co-authored 125 publications receiving 609 citations. Previous affiliations of Dongdong Weng include the Beijing Film Academy.
Papers
Journal ArticleDOI
Teaching based on augmented reality for a technical creative design course
TL;DR: The results of a pilot study show that the proposed teaching scheme significantly improves learning motivation, student creativity, and the teaching of creative design.
Proceedings ArticleDOI
Mixed Reality Office System Based on Maslow’s Hierarchy of Needs: Towards the Long-Term Immersion in Virtual Environments
Jie Guo, Dongdong Weng, Zhenliang Zhang, Haiyan Jiang, Yue Liu, Yongtian Wang, Henry Been-Lirn Duh +6 more
TL;DR: The results showed that a design based on the theory of Maslow’s Hierarchy of Needs can support users’ long-term immersion, suggesting it can serve as a guideline for the long-term use of MR systems.
Proceedings ArticleDOI
Comparison in Depth Perception between Virtual Reality and Augmented Reality Systems
TL;DR: An experiment comparing users' depth-perception performance in VR and AR systems using an optical see-through head-mounted display (HMD) shows that depth estimation is more accurate in AR than in VR.
Journal ArticleDOI
Effects of shading model and opacity on depth perception in optical see-through augmented reality
Jiamin Ping, Bruce H. Thomas, James Baumeister, Jie Guo, Dongdong Weng, Yue Liu +7 more
TL;DR: The results showed that shading models with specular highlights help improve depth perception in AR, and that users had the lowest matching error when the opacity of a virtual object was 0.8.
Journal ArticleDOI
HiFinger: One-Handed Text Entry Technique for Virtual Environments Based on Touches between Fingers
TL;DR: HiFinger is an eyes-free, one-handed wearable text entry technique for immersive virtual environments based on thumb-to-finger touches; it enables users to input text quickly, accurately, and comfortably using the sense of touch and a two-step input mode.