
Showing papers in "Computer Animation and Virtual Worlds in 2021"


Journal ArticleDOI
TL;DR: A simplified force‐based heterogeneous traffic simulation model to facilitate consistent adjustment of the parameters involved, which is also scalable to new types of road users, and facilitates an object‐oriented implementation with high performance.
Abstract: We present a simplified force‐based heterogeneous traffic simulation model to facilitate consistent adjustment of the parameters involved. Different from previous work which requires the adjustment of multiple ad hoc parameters to produce satisfactory results, our approach can achieve similar results by using clear and meaningful parameters to simulate interactions between various kinds of road users. To simulate diverse and realistic motions of road users, we parameterize the coefficients of the force model for better detailed motion control. Our approach is also scalable to new types of road users, and facilitates an object‐oriented implementation with high performance. We validate our framework with extensive experiments.
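As a rough illustration of the force-based formulation described above (not the authors' exact model), the sketch below advances a set of road users with a goal-driven force plus an exponential pairwise repulsion; the class name, coefficients, and thresholds are hypothetical placeholders for the "clear and meaningful parameters" the paper refers to.

```python
import numpy as np

class RoadUser:
    """Hypothetical agent with named, per-type coefficients of the kind the abstract describes."""
    def __init__(self, pos, vel, goal, desired_speed=1.4, relax_time=0.5,
                 repulsion_strength=2.0, repulsion_range=1.0):
        self.pos = np.asarray(pos, float)
        self.vel = np.asarray(vel, float)
        self.goal = np.asarray(goal, float)
        self.desired_speed = desired_speed        # m/s, differs per road-user type
        self.relax_time = relax_time              # s, how quickly velocity adapts
        self.repulsion_strength = repulsion_strength
        self.repulsion_range = repulsion_range

def step(agents, dt=0.05):
    """One explicit-Euler step of a generic force-based model (illustrative only)."""
    forces = []
    for a in agents:
        # Driving force toward the goal at the agent's desired speed.
        to_goal = a.goal - a.pos
        dist = np.linalg.norm(to_goal) + 1e-9
        desired_vel = a.desired_speed * to_goal / dist
        f = (desired_vel - a.vel) / a.relax_time
        # Pairwise exponential repulsion from the other road users.
        for b in agents:
            if b is a:
                continue
            d = a.pos - b.pos
            r = np.linalg.norm(d) + 1e-9
            f += a.repulsion_strength * np.exp(-r / a.repulsion_range) * d / r
        forces.append(f)
    for a, f in zip(agents, forces):
        a.vel += f * dt
        a.pos += a.vel * dt
```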

16 citations




Journal ArticleDOI
TL;DR: This study proposes simple, highly immersive x‐person asymmetric interactions that account for the experience type characteristics of asymmetric virtual environments, jointly experienced by virtual reality (VR) users and augmented reality (AR) users.
Abstract: This study proposes simple, highly immersive x‐person asymmetric interactions that account for the experience‐type characteristics of asymmetric virtual environments jointly experienced by virtual reality (VR) users and augmented reality (AR) users. First‐person interactions for VR users are performed through hand gestures, and a manipulation process is defined that maps the gestures to an object control scheme to provide intuitive interactions with the virtual environment and objects. The third‐person interaction for AR users is designed to view the overall virtual scene and to recognize and judge situations, allowing intuitive communication and interaction among the virtual environment, objects, and users through a touch interface. The core goal of this process is to provide all users who participate in asymmetric virtual environments with satisfying experiences and presence through individualized experience modes and roles. To this end, an application that uses the x‐person asymmetric interactions was created. Furthermore, a survey experiment was performed to statistically analyze the interactions and verify that they provided users with a satisfactory experience, that is, a satisfactory sense of presence and social presence in each user's situation.

8 citations


Journal ArticleDOI
TL;DR: In this paper, the authors conducted a systematic literature review consisting of N=61 papers published in the year 2020 that focused on AR/VR in the education sector, where studies have evaluated user perceptions in different countries, academic fields, and at varied educational levels.
Abstract: Research is increasingly being conducted to identify the benefits provided by the latest developments in the AR/VR domain, which has seen an increase in interest as a result of the stay-at-home phenomena in 2020. Of particular interest is the application of AR/VR to education, a discipline that has seen a rapid shift to online modules in 2020. To better understand the advancements in AR/VR enabled education, we conducted a systematic literature review consisting of N=61 papers published in the year 2020 that focused on AR/VR in the education sector. We particularly focused on papers where studies have evaluated user perceptions in different countries, academic fields, and at varied educational levels. We found that while most papers conducted user studies and evaluated the technical applications of AR/VR, user perceptions, impact, and awareness were not explored in detail. Our findings highlight trends that can drive critically needed innovations through AR/VR especially to help a globalized digital evolution in the education sector.

8 citations



Journal ArticleDOI
TL;DR: Related research on collision detection (CD) of deformable objects is briefly reviewed, and the review can serve as a reference for applications of CD across research directions.
Abstract: When simulating and modeling real objects, the unrealistic phenomenon of objects penetrating each other may occur in the model, which motivates research on collision detection (CD). Because CD is a bottleneck of virtual environment simulation, researchers have conducted in‐depth research on it, especially the CD of deformable objects. In this paper, we briefly review the literature on CD of deformable objects. First, we briefly introduce previous reviews of CD. Second, we review the popular research methods and limitations of CD between deformable objects. Third, we review the popular research methods and limitations of self‐collision detection in deformable objects. Finally, we discuss future directions of development. This review can serve as a reference for applications of CD across research directions.

7 citations



Journal ArticleDOI
TL;DR: A visualization‐driven approach for analyzing dance videos is proposed: it calculates a skeleton structure for pose estimation with enhanced post‐processing to capture dance moves, and an interactive visualization tool enables users and domain experts to interactively analyze the quality of dance moves along the timeline.
Abstract: Processing and analyzing dance videos are important in applications such as online cheerleading and dance training for physical coordination measurement. However, it is challenging for users to evaluate a massive amount of uploaded video, to precisely quantify and compare dance moves, and to visualize training results. To overcome these challenges, we propose a visualization‐driven approach for analyzing dance videos. We first encode extracted video frames into a set of heat maps via a neural network, which calculates a skeleton structure for pose estimation with enhanced post‐processing to help capture dance moves. A subsequent pose similarity method allows users to quantify differences between student training videos and the standard one. Finally, an interactive visualization tool enables users and domain experts to interactively analyze the quality of dance moves along the timeline. We demonstrate the applicability and effectiveness of our proposed tool using case studies involving physical coordination research.
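To make the pose-similarity step concrete, here is a minimal sketch that compares normalized 2D keypoints between a student frame and a reference frame with cosine similarity; the keypoint layout (COCO-style hip and neck indices) and the scoring scheme are assumptions, not the paper's method.

```python
import numpy as np

def normalize_pose(keypoints):
    """Center on the hip midpoint and scale by torso length so poses are comparable
    across body sizes and camera framings. `keypoints` is (J, 2); hip indices 8/11
    and neck index 1 follow a COCO-style layout (an assumption)."""
    kp = np.asarray(keypoints, float)
    hip_center = (kp[8] + kp[11]) / 2.0
    torso = np.linalg.norm(kp[1] - hip_center) + 1e-9
    return (kp - hip_center) / torso

def pose_similarity(student_kp, reference_kp):
    """Cosine similarity of flattened, normalized poses (roughly in [-1, 1])."""
    a = normalize_pose(student_kp).ravel()
    b = normalize_pose(reference_kp).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def score_sequence(student_frames, reference_frames):
    """Average per-frame similarity; the per-frame scores are what an interactive
    visualization could plot along the timeline."""
    return np.mean([pose_similarity(s, r)
                    for s, r in zip(student_frames, reference_frames)])
```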

6 citations



Journal ArticleDOI
TL;DR: This paper presents a unified framework to simulate surface and wave foams efficiently and realistically and proposes a kernel technique using the screen space density to reduce redundant foam particles efficiently, resulting in improved overall memory efficiency without loss of visual detail in terms of foam effects.
Abstract: This paper presents a unified framework to simulate surface and wave foams efficiently and realistically. The framework is designed first to project three‐dimensional (3D) water particles from an underlying water solver onto two‐dimensional screen space to reduce the computational complexity of determining where foam particles should be generated. Because foam effects are often created primarily in fast and complicated water flows, we analyze the acceleration and curvature values to identify the areas exhibiting such flow patterns. Foam particles are emitted from the identified areas in 3D space, and each foam particle is advected according to its type, which is classified on the basis of velocity, thereby capturing the essential characteristics of foam wave motions. We improve the realism of the resulting foam by classifying it into two types: surface foam and wave foam. Wave foam is characterized by the sharp wave patterns of torrential flows, and surface foam is characterized by a cloudy foam shape, even in water with reduced motion. Based on these features, we propose a technique to correct the velocity and position of a foam particle. In addition, we propose a kernel technique using the screen space density to reduce redundant foam particles efficiently, resulting in improved overall memory efficiency without loss of visual detail in terms of foam effects. Experiments convincingly demonstrate that the proposed approach is efficient and easy to use while delivering high‐quality results.
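A minimal sketch of the seeding and classification logic described above, assuming particle positions and velocities are available from the underlying water solver; the thresholds and the curvature proxy are illustrative stand-ins, not the paper's calibrated criteria.

```python
import numpy as np

def select_foam_emitters(vel, prev_vel, dt,
                         accel_threshold=4.0, turning_threshold=0.5):
    """Flag water particles whose flow is fast-changing enough to shed foam.
    `vel` and `prev_vel` are (N, 3) arrays from the solver; thresholds are illustrative."""
    accel = np.linalg.norm((vel - prev_vel) / dt, axis=1)
    # Crude curvature proxy: turning rate of the velocity direction between steps.
    v0 = prev_vel / (np.linalg.norm(prev_vel, axis=1, keepdims=True) + 1e-9)
    v1 = vel / (np.linalg.norm(vel, axis=1, keepdims=True) + 1e-9)
    turning = np.arccos(np.clip(np.sum(v0 * v1, axis=1), -1.0, 1.0)) / dt
    return (accel > accel_threshold) & (turning > turning_threshold)

def classify_foam(vel, wave_speed_threshold=2.5):
    """Label emitted foam particles as 'wave' (fast, sharp flows) or 'surface'
    (slower, cloudy flows), mirroring the two categories in the abstract."""
    speed = np.linalg.norm(vel, axis=1)
    return np.where(speed > wave_speed_threshold, "wave", "surface")
```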


Journal ArticleDOI
TL;DR: The researchers created a virtual environment (VE) for use in Southern Africa, where students could practice managing a young adult with a foreign object in the airway to determine whether a viable, “home‐made” solution could be created which could also be expanded later on to incorporate more scenarios.
Abstract: Virtual reality (VR) is becoming ever more used within the field of education. During this study, the researchers created a virtual environment (VE) for use in Southern Africa, where students could practice managing a young adult with a foreign object in the airway. The aim of the VE was to determine whether a viable, “home‐made” solution could be created which could also be expanded later on to incorporate more scenarios. This was due to the expensive nature of existing systems for virtual clinical simulation. To determine whether the VE is usable, two expert review panels assisted in testing the VE. The first panel comprised Computer Science experts and the second Health Science (HS) experts. Each panel evaluated the environment and the scenario using heuristic evaluation and cognitive walkthroughs. The recommendations made during each of the expert reviews were implemented to improve the VE, thus enabling students to experience an accurate, virtual scenario that could positively influence their learning experience. The findings and recommendations made during the expert reviews are presented in this paper to assist in improving future developments within the field of VR in HS education, especially for developing countries in Africa.

Journal ArticleDOI
TL;DR: This work uses flexible sensors to track human posture and achieves the goal of user authentication by introducing the long short‐term memory fully convolutional network (LSTM‐FCN), which directly takes noisy and sparse sensor data as input and verifies its consistency with the user's predefined movement patterns.
Abstract: The integration of conventional clothes with flexible electronics is a promising solution as a future‐generation computing platform. However, the problem of user authentication on this novel platform is still underexplored. This work uses flexible sensors to track human posture and achieves the goal of user authentication. We capture human movement patterns using four stretch sensors around the shoulder and one on the elbow. We introduce the long short‐term memory fully convolutional network (LSTM‐FCN), which directly takes noisy and sparse sensor data as input and verifies its consistency with the user's predefined movement patterns. The method can identify a user by matching movement patterns even if there are large intrapersonal variations. The authentication accuracy of LSTM‐FCN reaches 98.0%, which is 10.7% and 6.5% higher than that of dynamic time warping and dependent dynamic time warping, respectively.
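For readers unfamiliar with the architecture family, the following PyTorch sketch shows a generic LSTM-FCN that concatenates a recurrent branch with a 1D-convolutional branch over five sensor channels; the layer sizes, window length, and two-class head are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class LSTMFCN(nn.Module):
    """Generic LSTM-FCN classifier for multivariate sensor sequences.
    Input shape: (batch, channels, time) with channels = 5 stretch sensors."""
    def __init__(self, in_channels=5, num_classes=2, lstm_hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(in_channels, lstm_hidden, batch_first=True)
        self.fcn = nn.Sequential(
            nn.Conv1d(in_channels, 128, kernel_size=8, padding=4), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=5, padding=2), nn.BatchNorm1d(256), nn.ReLU(),
            nn.Conv1d(256, 128, kernel_size=3, padding=1), nn.BatchNorm1d(128), nn.ReLU(),
        )
        self.head = nn.Linear(lstm_hidden + 128, num_classes)

    def forward(self, x):
        # LSTM branch works on (batch, time, channels); keep the last hidden state.
        _, (h, _) = self.lstm(x.transpose(1, 2))
        # FCN branch works on (batch, channels, time); global average pool over time.
        f = self.fcn(x).mean(dim=-1)
        return self.head(torch.cat([h[-1], f], dim=1))

# Example: a batch of 8 two-second windows sampled at 50 Hz from 5 sensors.
logits = LSTMFCN()(torch.randn(8, 5, 100))
```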

Journal ArticleDOI
TL;DR: A preliminary usability evaluation in university classrooms shows that the teaching system in this paper can achieve a better interactive experience.
Abstract: The quality and efficiency of aircraft maintenance are key to ensuring flight safety and on‐time rates, and they mainly depend on the techniques and experience of the maintenance engineer. Generally, exercises on physical prototypes are used to improve the maintenance capability of engineers, but this wastes a lot of consumables and can easily cause safety accidents. With the development of computer technology, maintenance training in a virtual environment has become an advanced and reliable solution. In this paper, a virtual training system for aircraft maintenance based on gesture recognition interaction is established. Leap Motion is used as a sensor to construct a hybrid machine learning gesture recognition model, so as to obtain a natural human–computer interaction experience. In the recognition model, the initial weight matrix and the number of hidden layer nodes in the back propagation neural network are jointly optimized by a Particle Swarm Optimization algorithm with self‐adaptive inertia weight. This optimization algorithm achieved a recognition rate of 81.26% on the dynamic gesture database constructed in this paper, which is higher than that of other available algorithms. A preliminary usability evaluation in university classrooms shows that the teaching system in this paper can achieve a better interactive experience.
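As a sketch of the optimization loop, the code below implements a basic particle swarm with a linearly decreasing inertia weight that could search for a BP network's initial weights; the adaptation rule, swarm size, and bounds are illustrative stand-ins for the paper's self-adaptive scheme.

```python
import numpy as np

def pso_minimize(fitness, dim, n_particles=30, iters=100,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, bounds=(-1.0, 1.0)):
    """Minimal particle swarm optimizer with a linearly decreasing inertia weight
    (one common adaptive scheme; the paper's exact rule may differ). `fitness` maps
    a flat parameter vector, e.g. a BP network's initial weights, to a scalar error."""
    lo, hi = bounds
    pos = np.random.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    gbest_val = pbest_val.min()
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / max(iters - 1, 1)   # inertia decays over time
        r1, r2 = np.random.rand(n_particles, dim), np.random.rand(n_particles, dim)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if pbest_val.min() < gbest_val:
            gbest, gbest_val = pbest[pbest_val.argmin()].copy(), pbest_val.min()
    return gbest, gbest_val

# Toy usage: minimize a quadratic instead of a network's validation error.
best_w, best_err = pso_minimize(lambda v: float(np.sum(v ** 2)), dim=10)
```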

Journal ArticleDOI
TL;DR: An approach to construct realistic 3D facial morphable models (3DMM) that supports an intuitive facial‐attribute editing workflow, has excellent generative properties, and gives the user intuitive local control.
Abstract: We propose an approach to construct realistic 3D facial morphable models (3DMM) that allows an intuitive facial attribute editing workflow. Current face modeling methods using 3DMM suffer from a lack of local control. We thus create a 3DMM by combining local part‐based 3DMM for the eyes, nose, mouth, ears, and facial mask regions. Our local principal component analysis (PCA)‐based approach uses a novel method to select the best eigenvectors from the local 3DMM to ensure that the combined 3DMM is expressive, while allowing accurate reconstruction. We provide different editing paradigms, all designed from the analysis of the data set. Some use anthropometric measurements from the literature and others allow the user to control the dominant modes of variation extracted from the data set. Our part‐based 3DMM is compact, yet accurate, and compared to other 3DMM methods, it provides a new trade‐off between local and global control. We tested our approach on a data set of 135 scans used to derive the 3DMM, plus 19 scans that served for validation. The results show that our part‐based 3DMM approach has excellent generative properties and allows the user intuitive local control.
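A compact sketch of the local-PCA idea: fit one PCA model per facial part and synthesize a part from standardized coefficients. The variance cut-off used here to pick eigenvectors is a placeholder for the paper's selection method, and assembling parts into a full face is only indicated in a comment.

```python
import numpy as np

def build_part_model(part_vertices, var_keep=0.98):
    """Fit a per-part PCA model from registered scans. `part_vertices` is
    (num_scans, num_part_vertices * 3); the 98% variance cut-off is illustrative."""
    mean = part_vertices.mean(axis=0)
    centered = part_vertices - mean
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = (s ** 2) / max(len(part_vertices) - 1, 1)
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_keep) + 1)
    return {"mean": mean, "basis": vt[:k], "stdev": np.sqrt(var[:k])}

def synthesize_part(model, coeffs):
    """Generate one part from standardized coefficients (one slider per mode)."""
    coeffs = np.asarray(coeffs, float)
    return model["mean"] + (coeffs * model["stdev"]) @ model["basis"]

# A full face would then be assembled by synthesizing each part (eyes, nose, mouth,
# ears, facial mask) and blending the parts along their shared boundaries.
```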


Journal ArticleDOI
TL;DR: A smartphone AR application, named the AR‐E‐Helper, is presented that assists the learning of students in higher education lectures and is helpful in maintaining students' focus in class, promoting their interest, and increasing their satisfaction.
Abstract: The rapid evolution of augmented reality (AR) technology has presented new opportunities in the domain of education. Acting as a bridge between the virtual and real worlds, AR technology overcomes the physical limitations of our classrooms at a low cost and provides an interactive learning experience. In this study, we present a smartphone AR application, named the AR‐E‐Helper, which assists the learning of students in higher education lectures. Our goal is to provide an AR‐enhanced learning experience for students. To validate the effectiveness of the AR‐E‐Helper, we conducted an experiment that compares three classes: AR‐enhanced, smartphone‐enhanced, and nontechnology‐enhanced classes. Through the experiment, we observed that our application was helpful in maintaining students' focus in class, promoting their interest, and increasing their satisfaction. Furthermore, based on observations that the application introduced some downsides to the learning activities, we also identified ways to improve it. We expect that this study will be helpful for designing AR learning tools in the future.

Journal ArticleDOI
TL;DR: This paper designs a fully automatic interactive virtual agent that displays motion‐captured movements in response to the bodily movements of the user, drawing on video recordings of human interviewers to build a library of the movements interviewers are most likely to display.
Abstract: Postural interaction is of major importance during job interviews. While several prototypes enable users to rehearse for public speaking tasks and job interviews, few of these prototypes support subtle bodily interactions between the user and a virtual agent playing the role of an interviewer. The design of our system is informed by a multimodal corpus that was previously collected. In this paper, we explain how we were inspired by these video recordings of human interviewers to build a library of motion‐captured movements that interviewers are most likely to display. We designed a fully automatic interactive virtual agent able to display these movements in response to the bodily movements of the user. Thirty‐two participants presented themselves to this virtual agent during a simulated job interview. We focused on the self‐presentation task of the job interview, with the virtual agent listening. Participants stood on a force platform that recorded the displacements of their center of pressure to assess the postural impact of our design. We also collected video recordings of their movements and computed the contraction index and the quantity of motion of their bodies. We explain the different hypotheses that we made concerning (1) the comparison between the performance of participants with human interviewers and the performance of participants with virtual interviewers, (2) the comparison between mirror and random postural behaviors displayed by a female versus a male virtual interviewer, and (3) the correlation between the participants' performance and their personality traits. Our results suggest that users perceive the simulated self‐presentation task with the virtual interviewer as threatening and as difficult as the presentation task with the human interviewers. Furthermore, when users interact with a virtual interviewer that mirrors their postures, these users perceive the interviewer as being affiliative. Finally, a correlation analysis showed that personality traits had a significant relation to the postural behaviors and performance of the users during their presentation.
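For reference, common formulations of the two body-movement descriptors mentioned above can be computed from binary silhouettes as sketched below; the study's exact definitions may differ, so treat this as an illustrative approximation.

```python
import numpy as np

def quantity_of_motion(silhouettes):
    """Fraction of pixels that change between consecutive binary silhouettes,
    a standard definition of quantity of motion. `silhouettes` is a
    (frames, H, W) boolean array."""
    diffs = np.logical_xor(silhouettes[1:], silhouettes[:-1])
    return diffs.reshape(len(diffs), -1).mean(axis=1)

def contraction_index(silhouette):
    """Silhouette area over its bounding-box area: close to 1 for a contracted
    posture, smaller for an expanded one (one common convention)."""
    ys, xs = np.nonzero(silhouette)
    if len(ys) == 0:
        return 0.0
    box_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
    return float(silhouette.sum() / box_area)
```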



Journal ArticleDOI
Jie Liao, Mengqiang Wei, Yanping Fu, Qingan Yan, Chunxia Xiao
TL;DR: Quantitative and qualitative comparisons on the ETH3D benchmark demonstrate that the proposed MVS method can significantly improve the quality of reconstruction in low‐textured regions.
Abstract: In this paper, we propose a novel Multiview Stereo (MVS) method which can effectively estimate geometry in low‐textured regions. Conventional MVS algorithms predict geometry by performing dense correspondence estimation across multiple views under the constraint of epipolar geometry. As low‐textured regions contain less feature information for reliable matching, estimating geometry for low‐textured regions remains difficult for previous MVS methods. To address this issue, we propose an MVS method based on texture enhancement. By enhancing texture information for each input image via our multiscale bilateral decomposition and reconstruction algorithm, our method can estimate reliable geometry for low‐textured regions that are intractable for previous MVS methods. To densify the final output point cloud, we further propose a novel selective joint bilateral propagation filter, which can effectively propagate reliable geometry estimation to neighboring unpredicted regions. We validate the effectiveness of our method on the ETH3D benchmark. Quantitative and qualitative comparisons demonstrate that our method can significantly improve the quality of reconstruction in low‐textured regions.
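A simplified stand-in for the texture-enhancement step, using OpenCV's bilateral filter to split each image into base and detail layers and amplify the detail before matching; the number of levels, the sigma pairs, and the boost factor are assumptions rather than the paper's algorithm.

```python
import cv2
import numpy as np

def enhance_texture(img, sigmas=((5, 25), (9, 50)), boost=1.5):
    """Two-level bilateral base/detail decomposition with amplified detail layers.
    `sigmas` is a list of (sigmaSpace, sigmaColor) pairs, coarse to coarser."""
    base = img.astype(np.float32)
    details = []
    for sigma_space, sigma_color in sigmas:
        # d <= 0 lets OpenCV derive the neighborhood diameter from sigmaSpace.
        smoothed = cv2.bilateralFilter(base, -1, sigma_color, sigma_space)
        details.append(base - smoothed)   # detail = what the filter removed
        base = smoothed
    # Reconstruct with boosted detail so faint texture survives photometric matching.
    out = base + sum(boost * layer for layer in details)
    return np.clip(out, 0, 255).astype(np.uint8)

# enhanced = enhance_texture(cv2.imread("view_00.png"))  # then feed to the MVS matcher
```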






Journal ArticleDOI
TL;DR: A new keyframe extraction algorithm is proposed that reduces keyframe redundancy and the motion‐sequence reconstruction error, together with a new motion‐sequence reconstruction method that further reduces that error.
Abstract: In this paper, we make two contributions. The first is a new keyframe extraction algorithm, which reduces keyframe redundancy and the motion sequence reconstruction error. The second is a new motion sequence reconstruction method, which further reduces that error. Specifically, we treat the input motion sequence as a set of curves and extend binomial fitting to locate the points in whose vicinity the slope changes dramatically. We then take these points as input and obtain keyframes by density clustering. Finally, the motion curves are segmented at the keyframes, and the segmented curves are fitted with the binomial formula again to obtain the binomial parameters for motion reconstruction. Experiments show that our methods outperform existing techniques in terms of reconstruction error.
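A rough sketch of the extract-then-cluster pipeline: flag frames where the slope of the motion curves changes sharply and merge nearby candidates with density clustering. Finite differences replace the paper's binomial fitting here, and all thresholds are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_keyframes(motion, eps=3.0, min_samples=2, slope_quantile=0.9):
    """`motion` is a (frames, dofs) array. Frames with a strong change of slope are
    candidates; DBSCAN merges candidates that are close in time into one keyframe."""
    motion = np.asarray(motion, float)
    slope = np.gradient(motion, axis=0)
    slope_change = np.abs(np.gradient(slope, axis=0)).sum(axis=1)
    threshold = np.quantile(slope_change, slope_quantile)
    candidates = np.flatnonzero(slope_change >= threshold)
    if len(candidates) == 0:
        return [0, len(motion) - 1]
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(
        candidates.reshape(-1, 1).astype(float))
    keyframes = {0, len(motion) - 1}            # always keep the endpoints
    for lab in set(labels):
        members = candidates[labels == lab]
        # Keep the strongest candidate of each cluster as the keyframe.
        keyframes.add(int(members[np.argmax(slope_change[members])]))
    return sorted(keyframes)
```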

Journal ArticleDOI
TL;DR: A novel approach for reconstructing plausible three‐dimensional (3D) human body models from a small number of 3D points that represent body parts, with the help of Laplacian deformation and a small database.
Abstract: We propose a novel approach for reconstructing plausible three‐dimensional (3D) human body models from a small number of 3D points that represent body parts. We leverage a database of 3D models of humans varying from each other by physical attributes such as age, gender, weight, and height. First, we divide the bodies in the database into seven semantic regions. Then, for each input region consisting of at most 40 points, we search the database for the best matching body part. For the matching criterion, we use the distance between novel point‐based features of the input points and of the body parts in the database. We then combine the matched parts from different bodies into one body with the help of Laplacian deformation, which results in a plausible human body. To evaluate our results objectively, we pick points from each part of the ground‐truth human body models, then reconstruct them using our method and compare the resulting bodies with the corresponding ground‐truths. Our results are also compared with registration‐based results. In addition, we run our algorithm with noisy data to test the robustness of our method, and we run it with input points whose body parts are manually edited, which produces plausible human bodies that do not even exist in our database. Our experiments verify qualitatively and quantitatively that the proposed approach reconstructs human bodies with different physical attributes from a small number of points using a small database.
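To illustrate the part-matching step, the sketch below scores database body parts against an input region using a simple pairwise-distance descriptor; the descriptor and the distance metric are assumptions (the paper's point-based features are not detailed in the abstract), and the Laplacian blending step is only noted in a comment.

```python
import numpy as np

def part_descriptor(points):
    """Simple, translation-invariant descriptor of up to 40 points for one body
    region: statistics of centered pairwise distances plus the extent per axis."""
    pts = np.asarray(points, float)
    pts = pts - pts.mean(axis=0)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    tri = d[np.triu_indices(len(pts), k=1)]
    return np.array([tri.mean(), tri.std(), tri.max(),
                     *np.abs(pts).max(axis=0)])

def best_matching_part(input_points, database_parts):
    """Return the id of the database body part whose descriptor is closest to the
    input region's descriptor. `database_parts` maps part ids to sample points."""
    query = part_descriptor(input_points)
    scores = {pid: np.linalg.norm(query - part_descriptor(pts))
              for pid, pts in database_parts.items()}
    return min(scores, key=scores.get)

# The seven matched parts would then be merged into one mesh and smoothed with a
# Laplacian deformation so the seams between donor bodies remain plausible.
```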