
Showing papers on "Motion analysis published in 2017"


Proceedings ArticleDOI
03 Apr 2017
TL;DR: DeepSense as discussed by the authors integrates convolutional and recurrent neural networks to exploit local interactions among similar mobile sensors, merge local interactions of different sensory modalities into global interactions, and extract temporal relationships to model signal dynamics.
Abstract: Mobile sensing and computing applications usually require time-series inputs from sensors, such as accelerometers, gyroscopes, and magnetometers. Some applications, such as tracking, can use sensed acceleration and rate of rotation to calculate displacement based on physical system models. Other applications, such as activity recognition, extract manually designed features from sensor inputs for classification. Such applications face two challenges. On one hand, on-device sensor measurements are noisy. For many mobile applications, it is hard to find a distribution that exactly describes the noise in practice. Unfortunately, calculating target quantities based on physical system and noise models is only as accurate as the noise assumptions. Similarly, in classification applications, although manually designed features have proven to be effective, it is not always straightforward to find the most robust features to accommodate diverse sensor noise patterns and heterogeneous user behaviors. To this end, we propose DeepSense, a deep learning framework that directly addresses the aforementioned noise and feature customization challenges in a unified manner. DeepSense integrates convolutional and recurrent neural networks to exploit local interactions among similar mobile sensors, merge local interactions of different sensory modalities into global interactions, and extract temporal relationships to model signal dynamics. DeepSense thus provides a general signal estimation and classification framework that accommodates a wide range of applications. We demonstrate the effectiveness of DeepSense using three representative and challenging tasks: car tracking with motion sensors, heterogeneous human activity recognition, and user identification with biometric motion analysis. DeepSense significantly outperforms the state-of-the-art methods for all three tasks. 
In addition, we show that DeepSense is feasible to implement on smartphones and embedded devices thanks to its moderate energy consumption and low latency.
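The integration the abstract describes (per-sensor convolutions for local interactions, a merge across modalities, then a recurrent pass over time) can be sketched in a toy numpy version. This is not the authors' implementation; the kernel, layer sizes, and the simple tanh-RNN cell are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, k):
    """Valid 1D convolution of a multi-channel signal with kernel k,
    summed over channels (one scalar feature per time step)."""
    T, _ = x.shape
    L = len(k)
    return np.array([np.sum(x[t:t + L] * k[:, None]) for t in range(T - L + 1)])

# Toy inputs: 3 sensors (e.g., accel, gyro, mag), 32 time steps, 3 axes each.
sensors = [rng.standard_normal((32, 3)) for _ in range(3)]

# Local interactions: one convolution per sensor (shared toy kernel).
kernel = np.array([0.25, 0.5, 0.25])
local = [conv1d(x, kernel) for x in sensors]          # 3 arrays of length 30

# Merge modalities: stack per-sensor features at each time step.
merged = np.stack(local, axis=1)                      # shape (30, 3)

# Temporal dynamics: a minimal tanh-RNN over the merged features.
W_h = 0.5 * np.eye(4)
W_x = rng.standard_normal((4, 3)) * 0.1
h = np.zeros(4)
for t in range(merged.shape[0]):
    h = np.tanh(W_h @ h + W_x @ merged[t])

print(h.shape)  # the final state summarizes the whole sequence
```

A real DeepSense model stacks several convolutional layers and stacked GRUs; the sketch only mirrors the data flow of local-to-global-to-temporal feature extraction.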

538 citations


Journal ArticleDOI
TL;DR: It is shown that the Kinect v2 can be used as a reliable tool to measure shoulder ROM and arm motion smoothness, and a concept of motion smoothness reflecting the quality of arm motion is proposed.

36 citations


Journal ArticleDOI
TL;DR: The overall findings indicate that the developed method is effective at reconstructing dynamic structural time histories, though the choice of optical flow algorithm plays a significant role in the overall performance.
Abstract: After any disaster, there is an immediate need to assess the integrity of local structures. When available, the displacement time history of a structure during the event can provide an invaluable source of triage assessment information. Although conventional sensors such as accelerometers readily provide this information, many structures are not instrumented, and in these cases an alternative is needed. This paper presents such an alternative: a flexible, low-cost, and target-free approach to extracting motion time histories from video recordings of structures during an event. The approach is designed for scenarios where video recordings have inadvertently captured a dynamic event, with the goal of repurposing them for structural triage assessment through a combination of computer vision and signal processing techniques. A combination of parametric video stabilization, 3D denoising, and outlier-robust camera motion estimation is employed to mitigate the effects of camera motion and video encoding artifacts. The approach leverages the computer vision concept of optical flow to provide motion estimates, and four canonical optical flow algorithms are assessed as part of this study. The developed approach was validated on records from the Network for Earthquake Simulation database. The overall findings indicate that the developed method is effective at reconstructing dynamic structural time histories, though the choice of optical flow algorithm plays a significant role in the overall performance. In particular, any employed optical flow algorithm must not overpenalize the high motion gradients that occur at the boundary between in-motion buildings and the image background.
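As a minimal stand-in for the displacement-extraction step (not one of the four optical flow algorithms the study assesses), phase correlation recovers a known shift between two 1D "frames":

```python
import numpy as np

rng = np.random.default_rng(1)

# A 1D "image row" and a copy displaced by 3 pixels (circular shift).
frame0 = rng.standard_normal(256)
frame1 = np.roll(frame0, 3)

# Phase correlation: the inverse FFT of the cross-power spectrum
# peaks at the displacement between the two frames.
F0, F1 = np.fft.fft(frame0), np.fft.fft(frame1)
corr = np.fft.ifft(F1 * np.conj(F0)).real
shift = int(np.argmax(corr))

print(shift)  # 3
```

Real structural footage adds camera motion and encoding noise, which is exactly why the paper's stabilization and denoising stages precede the flow computation.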

27 citations


Journal ArticleDOI
TL;DR: A novel theoretical model for speckle movement due to multi-object motion is developed, and a simple technique based on global scale-space speckle motion analysis is presented for measuring small (5--50 microns) compound motion of multiple objects, along all three axes.
Abstract: We present CoLux, a novel system for measuring micro 3D motion of multiple independently moving objects at macroscopic standoff distances. CoLux is based on speckle imaging, where the scene is illuminated with a coherent light source and imaged with a camera. Coherent light, on interacting with optically rough surfaces, creates a high-frequency speckle pattern in the captured images. The motion of objects results in movement of speckle, which can be measured to estimate the object motion. Speckle imaging is widely used for micro-motion estimation in several applications, including industrial inspection, scientific imaging, and user interfaces (e.g., optical mice). However, current speckle imaging methods are largely limited to measuring 2D motion (parallel to the sensor image plane) of a single rigid object. We develop a novel theoretical model for speckle movement due to multi-object motion, and present a simple technique based on global scale-space speckle motion analysis for measuring small (5--50 microns) compound motion of multiple objects, along all three axes. Using these tools, we develop a method for measuring 3D micro-motion histograms of multiple independently moving objects, without tracking the individual motion trajectories. In order to demonstrate the capabilities of CoLux, we develop a hardware prototype and a proof-of-concept subtle hand gesture recognition system with a broad range of potential applications in user interfaces and interactive computer graphics.
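The idea of reading off multiple object motions from global speckle correlation can be illustrated in 1D: two independent random "speckle fields" are superposed, each shifted by a different amount, and the frame-to-frame cross-correlation shows one peak per motion. The signals and shifts below are synthetic; this is an illustration of the principle, not the CoLux algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4096

# Two independent "speckle fields" superposed in one frame.
a, b = rng.standard_normal(N), rng.standard_normal(N)
frame0 = a + b
frame1 = np.roll(a, 2) + np.roll(b, 7)   # the two objects move by 2 and 7 samples

# Cross-correlation of the frames: one dominant peak per object motion,
# because the cross terms between independent fields stay small.
corr = np.fft.ifft(np.fft.fft(frame1) * np.conj(np.fft.fft(frame0))).real
top2 = sorted(np.argsort(corr)[-2:])

print(top2)  # the two motion magnitudes, without tracking either object
```

This mirrors the paper's point that a motion *histogram* of several movers can be read out globally, without segmenting or tracking individual trajectories.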

26 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: This work proposes an algorithm combining monocular 3D pose estimation with physics-based modeling to introduce a statistical framework for fast and robust 3D motion analysis from 2D video data.
Abstract: Motion analysis is often restricted to a laboratory setup with multiple cameras and force sensors, which requires expensive equipment and knowledgeable operators; it therefore lacks simplicity and flexibility. We propose an algorithm combining monocular 3D pose estimation with physics-based modeling to introduce a statistical framework for fast and robust 3D motion analysis from 2D video data. We use a factorization approach to learn 3D motion coefficients and combine them with physical parameters that describe the dynamics of a mass-spring model. Our approach requires neither additional force measurements nor torque optimization, uses only a single camera, and still allows unobservable torques in the human body to be estimated. We show that our algorithm improves monocular 3D reconstruction by enforcing plausible human motion and resolving the ambiguity between camera and object motion. The performance is evaluated on different motions and multiple test data sets, as well as on challenging outdoor sequences.

25 citations


Journal ArticleDOI
01 Jan 2017
TL;DR: The combination of low-cost embedded and ultrasonic hardware that forms the transmitter and receiver subsystem (consisting of multiple mobile receiver nodes), together with powerful signal processing techniques, yields a high-accuracy pose estimation system that can be used as an affordable tool in various fields and applications.
Abstract: Motion capture and human body pose estimation systems have become more commonplace nowadays because of the movie and video game industry. These measurement systems have proven useful for applications beyond entertainment. One such application is motion analysis, which can be used to improve the form of athletes or to provide an objective validation tool for rehabilitation treatments. These analyses are done using high-accuracy measurement systems, which come at a high cost. Although there are consumer products (e.g., the Microsoft Kinect) that offer movement tracking at low cost, their accuracy does not suffice for clinical movement analysis applications. This paper therefore focuses on reducing the cost of a human body pose estimation system while retaining the required accuracy. The proposed solution comprises an embedded ultrasonic transmitter and receiver subsystem. The receiver subsystem consists of multiple mobile nodes that are equipped with a small microphone array (at least three microphones). Each mobile receiver node captures the encoded, simultaneously broadcast ultrasonic transmissions from a distributed transmitter array (which consists of at least three elements). Using signal processing, a distance can be calculated between each transmitter and microphone, resulting in at least nine distances for each mobile node. Using these distances in combination with the positions of the transmitters and the microphone array configuration, the XYZ-position of the mobile node and its rotation about these axes (six degrees of freedom) can be estimated.
The combination of low-cost embedded and ultrasonic hardware that forms the transmitter and receiver subsystem (consisting of multiple mobile receiver nodes), together with powerful signal processing techniques, yields a high-accuracy pose estimation system that can be used as an affordable tool in various fields and applications (e.g., gait analysis for rehabilitation purposes).
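The position half of the pipeline (a node's XYZ-position from transmitter-to-microphone distances) reduces to multilateration. A hedged sketch with an assumed transmitter layout and noise-free distances; rotation estimation from the microphone array geometry is omitted:

```python
import numpy as np

# Assumed transmitter positions (meters) and one microphone at p_true.
tx = np.array([[0.0, 0.0, 0.0],
               [2.0, 0.0, 0.0],
               [0.0, 2.0, 0.0],
               [0.0, 0.0, 2.0]])
p_true = np.array([0.3, 0.7, 0.5])
d = np.linalg.norm(tx - p_true, axis=1)   # measured distances (noise-free here)

# Subtracting the first range equation linearizes |p - t_i|^2 = d_i^2:
#   2 (t_i - t_0) . p = |t_i|^2 - |t_0|^2 - d_i^2 + d_0^2
A = 2.0 * (tx[1:] - tx[0])
b = (np.sum(tx[1:] ** 2, axis=1) - np.sum(tx[0] ** 2)
     - d[1:] ** 2 + d[0] ** 2)
p_est, *_ = np.linalg.lstsq(A, b, rcond=None)

print(p_est)  # close to p_true
```

With the paper's nine (or more) distances per node, the same least-squares system is simply overdetermined, which is what buys robustness against individual noisy range measurements.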

25 citations


Journal ArticleDOI
TL;DR: This paper derives the Extended Continuous Motion Model (ECMM) by clustering trajectories into multiple categories with a K-means algorithm and fitting each category using Fourier series, and proposes a novel motion state estimation method using the Expectation-Maximization (EM) algorithm, which in turn contributes to accurate trajectory prediction.
Abstract: Motion state estimation and trajectory prediction of a spinning ball (the motion state of a ping-pong ball consists of the flying state, its real-time translational velocity, and the spin state, its real-time rotational velocity) are two important but challenging issues, both for the next generation of robotic table tennis systems and for research on the motion analysis of spinning-flying objects. Due to the Magnus force acting on the ball, the flying state and spin state are coupled, which makes their accurate estimation a considerable challenge. In this paper, we first derive the Extended Continuous Motion Model (ECMM) by clustering trajectories into multiple categories with a K-means algorithm and fitting each category using Fourier series. The ECMM can easily adapt to all kinds of trajectories. Based on the ECMM, we propose a novel motion state estimation method using the Expectation-Maximization (EM) algorithm, which in turn contributes to accurate trajectory prediction. In this method, the category in the ECMM is treated as a latent variable, and the likelihood of the motion state is formulated as a Gaussian Mixture Model (GMM) of the differences between trajectory predictions and observations. The effectiveness and accuracy of the proposed method are verified by offline evaluation on a collected dataset, as well as by online evaluation in which the humanoid robotic table tennis system "Wu & Kong" successfully hits high-speed spinning balls.
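The Fourier-series fitting step of the ECMM can be sketched as an ordinary least-squares problem. The trajectory, harmonic count, and sampling below are invented; the K-means clustering and EM state estimation are omitted:

```python
import numpy as np

# Synthetic 1D ball trajectory sampled over one flight segment.
t = np.linspace(0.0, 1.0, 100)
z = 0.8 + 0.3 * np.cos(2 * np.pi * t) - 0.1 * np.sin(4 * np.pi * t)

# Least-squares fit of a truncated Fourier series (two harmonics here).
cols = [np.ones_like(t)]
for k in (1, 2):
    cols += [np.cos(2 * np.pi * k * t), np.sin(2 * np.pi * k * t)]
X = np.stack(cols, axis=1)
coef, *_ = np.linalg.lstsq(X, z, rcond=None)
resid = np.max(np.abs(X @ coef - z))

print(resid)  # near zero: the series reproduces the trajectory
```

In the paper, one such fitted series per trajectory category is what lets the EM step score a new observation against each category's predicted trajectory.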

24 citations


Journal ArticleDOI
TL;DR: A real-time system acquiring and analyzing video sequences from soccer matches is presented; it proved able to collect accurate tracking statistics throughout different soccer matches in real time with only two human operators.
Abstract: Computer-aided sports analysis is demanded by coaches and the media. Image processing and machine learning techniques that allow for “live” recognition and tracking of players exist. But these methods are far from collecting and analyzing event data fully autonomously. To generate accurate results, human interaction is required at different stages including system setup, calibration, supervision of classifier training, and resolution of tracking conflicts. Furthermore, the real-time constraints are challenging: in contrast to other object recognition and tracking applications, we cannot treat data collection, annotation, and learning as an offline task. A semi-automatic labeling of training data and robust learning given few examples from unbalanced classes are required. We present a real-time system acquiring and analyzing video sequences from soccer matches. It estimates each player’s position throughout the whole match in real-time. Performance measures derived from these raw data allow for an objective evaluation of physical and tactical profiles of teams and individuals. The need for precise object recognition, the restricted working environment, and the technical limitations of a mobile setup are taken into account. Our contribution is twofold: (1) the deliberate use of machine learning and pattern recognition techniques allows us to achieve high classification accuracy in varying environments. We systematically evaluate combinations of image features and learning machines in the given online scenario. Switching between classifiers depending on the amount of training data and available training time improves robustness and efficiency. (2) A proper human–machine interface decreases the number of required operators who are incorporated into the system’s learning process. Their main task reduces to the identification of players in uncertain situations. 
Our experiments showed high performance in the classification task, achieving an average error rate of 3% on three real-world datasets. The system proved able to collect accurate tracking statistics throughout different soccer matches in real time with only two human operators. We finally show how the resulting data can be used instantly for consumer applications and discuss further development in the context of behavior analysis.

23 citations


Journal ArticleDOI
Xinyao Sun1, Simon Byrns1, Irene Cheng1, Bin Zheng1, Anup Basu1 
TL;DR: This work introduces a smart sensor-based motion detection technique for objective measurement and assessment of surgical dexterity among users at different experience levels and demonstrates that the proposed motion analysis technique applied to open surgical procedures is a promising step towards the development of objective computer-assisted assessment and training systems.
Abstract: We introduce a smart sensor-based motion detection technique for objective measurement and assessment of surgical dexterity among users at different experience levels. The goal is to allow trainees to evaluate their performance based on a reference model shared through communication technology, e.g., the Internet, without the physical presence of an evaluating surgeon. While in the current implementation we used a Leap Motion Controller to obtain motion data for analysis, our technique can be applied to motion data captured by other smart sensors, e.g., OptiTrack. To differentiate motions captured from different participants, measurement and assessment in our approach are achieved using two strategies: (1) low-level descriptive statistical analysis, and (2) Hidden Markov Model (HMM) classification. Based on our surgical knot tying task experiment, we can conclude that finger motions generated from users with different surgical dexterity, e.g., expert and novice performers, display differences in path length, number of movements, and task completion time. In order to validate the discriminatory ability of HMM for classifying different movement patterns, a non-surgical task was included in our analysis. Experimental results demonstrate that our approach had 100% accuracy in discriminating between expert and novice performances. Our proposed motion analysis technique applied to open surgical procedures is a promising step towards the development of objective computer-assisted assessment and training systems.
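The first strategy, low-level descriptive statistics, can be sketched on a toy trajectory. The data, sampling rate, and speed threshold below are assumptions, and the HMM stage is omitted:

```python
import numpy as np

# Toy 3D fingertip trajectory sampled at 100 Hz (positions in mm).
dt = 0.01
pos = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1], [1, 1, 1]], float)

# Path length: sum of distances between consecutive samples.
steps = np.linalg.norm(np.diff(pos, axis=0), axis=1)
path_length = steps.sum()

# Number of movements: count runs where speed exceeds a threshold.
speed = steps / dt
moving = speed > 50.0                       # threshold in mm/s (assumed)
n_movements = int(np.sum(np.diff(moving.astype(int)) == 1) + moving[0])

completion_time = dt * (len(pos) - 1)
print(path_length, n_movements, completion_time)
```

These three scalars per trial are exactly the kind of features the abstract reports as separating expert from novice performers.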

22 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: This work introduces a kinematic chain reweighting scheme to identify and to correct misclassified pixels, and achieves rotation invariance by performing PCA on the input depth image.
Abstract: Motion analysis of infants is used for early detection of movement disorders like cerebral palsy. For the development of automated methods, capturing the infant's pose accurately is crucial. Our system for predicting 3D joint positions is based on a recently introduced pixelwise body part classifier using random ferns, to which we propose multiple enhancements. We apply a feature selection step before training random ferns to avoid the inclusion of redundant features. We introduce a kinematic chain reweighting scheme to identify and to correct misclassified pixels, and we achieve rotation invariance by performing PCA on the input depth image. The proposed methods improve pose estimation accuracy by a large margin on multiple recordings of infants. We demonstrate the suitability of the approach for motion analysis by comparing predicted knee angles to ground truth angles.
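The PCA-based rotation normalization can be illustrated in 2D with a synthetic point cloud standing in for the segmented depth pixels; the paper operates on depth images, so this only shows the derotation idea:

```python
import numpy as np

rng = np.random.default_rng(3)

# An elongated 2D point cloud (stand-in for body pixels in a depth image).
pts = rng.standard_normal((500, 2)) * np.array([5.0, 1.0])

# Rotate it by an arbitrary angle, as if the infant lay at an angle.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rotated = pts @ R.T

# PCA: derotate by projecting onto the principal axes of the covariance.
centered = rotated - rotated.mean(axis=0)
_, vecs = np.linalg.eigh(np.cov(centered.T))
aligned = centered @ vecs[:, ::-1]          # major axis first

cov = np.cov(aligned.T)
print(cov)  # off-diagonals near 0, variance largest along the first axis
```

Because the aligned cloud looks the same whatever the original orientation, a classifier trained on aligned data becomes rotation invariant without augmenting the training set with rotated copies.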

22 citations


Proceedings ArticleDOI
01 Oct 2017
TL;DR: This study confirms that the wearable motion capture system is useful for measuring the motion and plantar pressure data in outdoor sports such as skiing.
Abstract: Skiing is one of the most popular winter sports in the world. Even though the equipment has been improved to prevent injuries during skiing, injury risks to the lower extremity are still high. It is therefore necessary to investigate injury risk through motion analysis during skiing. A wearable motion capture system consisting of inertial sensors and insole pressure sensors can be utilized given the restrictions of conventional optical motion capture systems. In this study, the motions for short- and middle-turns during skiing were analyzed using a wearable motion capture system and multi-scale computer simulation technology. Seven male certified ski coaches participated in this study, and their full-body motion and foot pressure data were simultaneously recorded by the wearable motion capture system. Joint kinematics and kinetics in the hip, knee, and ankle of the right lower extremity were analyzed during short- and middle-turns. Even though only a slight difference in joint kinematics between the two turns was predicted, the joint forces and moments for the middle-turn were higher than those for the short-turn. Because these higher joint forces and moments can result in osteoarthritis or ligament injury at the joint, the injury risk for the middle-turn may be higher than that for the short-turn. This study confirms that the wearable motion capture system is useful for measuring motion and plantar pressure data in outdoor sports such as skiing.

Journal ArticleDOI
TL;DR: A method for detecting the motion of human lower limbs, including all degrees of freedom, via inertial sensors is proposed; it permits analysis of the patient's motion ability, and its results are unbiased compared with therapists' qualitative estimations.
Abstract: The hemiplegic rehabilitation state diagnosing performed by therapists can be biased due to their subjective experience, which may deteriorate the rehabilitation effect. In order to improve this situation, a quantitative evaluation is proposed. Though many motion analysis systems are available, they are too complicated for practical application by therapists. In this paper, a method for detecting the motion of human lower limbs including all degrees of freedom (DOFs) via the inertial sensors is proposed, which permits analyzing the patient's motion ability. This method is applicable to arbitrary walking directions and tracks of persons under study, and its results are unbiased, as compared to therapist qualitative estimations. Using the simplified mathematical model of a human body, the rotation angles for each lower limb joint are calculated from the input signals acquired by the inertial sensors. Finally, the rotation angle versus joint displacement curves are constructed, and the estimated values of joint motion angle and motion ability are obtained. The experimental verification of the proposed motion detection and analysis method was performed, which proved that it can efficiently detect the differences between motion behaviors of disabled and healthy persons and provide a reliable quantitative evaluation of the rehabilitation state.
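A generic way to fuse inertial signals into a joint angle is a complementary filter, shown here on synthetic static-tilt data; the paper's simplified body model and angle extraction differ in detail, so treat this only as a hint at the kind of computation involved:

```python
import numpy as np

# Static tilt of 30 degrees about one axis: the gyro reads ~0 deg/s,
# while the accelerometer sees the gravity components of the tilt.
true_angle = 30.0
accel_angle = np.degrees(np.arctan2(np.sin(np.radians(true_angle)),
                                    np.cos(np.radians(true_angle))))
gyro_rate, dt, alpha = 0.0, 0.01, 0.98

# Complementary filter: trust the gyro short-term, the accel long-term.
angle = 0.0
for _ in range(500):
    angle = alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

print(angle)  # converges toward 30.0
```

Stacking one such estimate per joint axis yields the rotation-angle curves from which the paper's joint motion ability measures are read off.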

Journal ArticleDOI
TL;DR: A novel learning-based framework is proposed to track a handheld needle by detecting microscale variations of motion dynamics over time, incorporating neighboring pixels to mitigate the effects of the subtle tremor motion of a handheld transducer.
Abstract: This paper presents a new micro-motion-based approach to track a needle in ultrasound images captured by a handheld transducer. We propose a novel learning-based framework to track a handheld needle by detecting microscale variations of motion dynamics over time. The current state of the art on using motion analysis for needle detection uses absolute motion and hence works well only when the transducer is static. We have introduced and evaluated novel spatiotemporal and spectral features, obtained from the phase image, in a self-supervised tracking framework to improve the detection accuracy in the subsequent frames using incremental training. Our proposed tracking method involves volumetric feature selection and differential flow analysis to incorporate the neighboring pixels and mitigate the effects of the subtle tremor motion of a handheld transducer. To evaluate the detection accuracy, the method is tested on porcine tissue in vivo, during needle insertion in the biceps femoris muscle. Experimental results show mean, standard deviation, and root-mean-square errors of 1.28°, 1.09°, and 1.68° in the insertion angle, and 0.82 mm, 1.21 mm, and 1.47 mm in the needle tip position, respectively. Compared to appearance-based detection approaches, the proposed method is especially suitable for needles with ultrasonic characteristics that are imperceptible in the static image and to the naked eye.

Journal ArticleDOI
Heike Brock1, Yuji Ohgi1
TL;DR: Multiple inertial sensors were employed to build and evaluate a framework for the assessment of jump errors and motion style, and the chosen signal-based motion features appeared better suited to extracting and recognizing style errors than the chosen kinematics-induced features obtained through expensive post-processing.
Abstract: Ski jumping is an expert sport that requires fine motor skills to guarantee the safe conduct of training and competition. In this paper, we therefore employed multiple inertial sensors to build and evaluate a framework for the assessment of jump errors and motion style. First, a large set of inertial ski jump motion captures was augmented, segmented, and transformed into multiple statistical and time-serial motion feature representations. All features were then used to learn and retrieve style error information from the jump segments under two classification strategies in a cross-validation cycle. Average accuracies indicated the applicability of the proposed system, with error recognition rates between 60% and 75%, which should be considered sufficiently good given the present size and quality of the real-life training data. Furthermore, the chosen signal-based motion features appeared better suited to extracting and recognizing style errors than the chosen kinematics-induced features obtained through expensive post-processing. This finding could constitute important information for many related application systems, and it should be investigated whether the result generalizes to different extracted features or other feature set compositions in the future.

Patent
05 Jun 2017
TL;DR: A wearable motion sensor collects and transmits motion data for use in a fall prediction model that uses features and parameters to classify the motion data and issue a notification when a fall is imminent.
Abstract: A system and method of motion analysis, fall detection, and fall prediction using machine learning and classifiers. A wearable motion sensor collects and transmits motion data for use in a fall prediction model that uses features and parameters to classify the motion data and issue a notification when a fall is imminent. Using machine learning, the fall prediction model can be created, implemented, and evaluated, and it can evolve over time with additional data. The system and method can use individual data or pool data from various individuals for use in fall prediction.
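The patent does not disclose its features or thresholds, so the following is a generic threshold-based sketch of two classic accelerometer fall features (impact peak plus post-impact stillness), standing in for the learned classifier; all traces and thresholds are invented:

```python
import numpy as np

def fall_features(acc_mag):
    """Impact peak (in g) and post-impact stillness (std of the last 20%)
    of an accelerometer magnitude trace."""
    peak = float(np.max(acc_mag))
    tail = acc_mag[int(len(acc_mag) * 0.8):]
    return peak, float(np.std(tail))

def is_fall(peak, stillness, peak_thresh=3.0, still_thresh=0.1):
    # Falls combine a hard impact with lying still afterwards.
    return peak > peak_thresh and stillness < still_thresh

# Synthetic traces in g, sampled at 50 Hz: a fall (free-fall dip, impact
# spike, then lying still) versus walking (oscillation around 1 g).
t = np.arange(0, 4, 0.02)
fall = np.ones_like(t)
fall[90:100] = 0.2      # brief free-fall
fall[100] = 4.5         # impact spike
walk = 1.0 + 0.4 * np.sin(2 * np.pi * 2 * t)

print(is_fall(*fall_features(fall)), is_fall(*fall_features(walk)))
```

A learned model, as in the patent, would replace the two hand-set thresholds with a classifier trained on pooled data, but the feature intuition is the same.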

Journal ArticleDOI
TL;DR: This work proposes an Eulerian phase‐based approach which uses the phase information from the sample video to animate the static image, and demonstrates that this simple, phase-based approach for transferring small motion is more effective at animating still images than methods which rely on optical flow.
Abstract: We present a novel approach for animating static images that contain objects that move in a subtle, stochastic fashion (e.g. rippling water, swaying trees, or flickering candles). To do this, our algorithm leverages example videos of similar objects, supplied by the user. Unlike previous approaches which estimate motion fields in the example video to transfer motion into the image, a process which is brittle and produces artefacts, we propose an Eulerian phase-based approach which uses the phase information from the sample video to animate the static image. As is well known, phase variations in a signal relate naturally to the displacement of the signal via the Fourier Shift Theorem. To enable local and spatially varying motion analysis, we analyse phase changes in a complex steerable pyramid of the example video. These phase changes are then transferred to the corresponding spatial sub-bands of the input image to animate it. We demonstrate that this simple, phase-based approach for transferring small motion is more effective at animating still images than methods which rely on optical flow.
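The Fourier Shift Theorem the method builds on can be demonstrated directly in 1D: multiplying a signal's spectrum by a linear phase ramp displaces the signal, with no motion field ever computed. The paper applies this locally, to phase in a complex steerable pyramid, rather than globally as here:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 128
x = rng.standard_normal(N)

# Fourier Shift Theorem: a linear phase ramp in the frequency domain
# corresponds to a (circular) displacement in the signal domain.
d = 5
freqs = np.fft.fftfreq(N)
shifted = np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * freqs * d)).real

print(np.allclose(shifted, np.roll(x, d)))  # True
```

Transferring *phase changes* instead of estimated flow vectors is what lets the method avoid the artefacts of explicit motion estimation.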

Journal ArticleDOI
TL;DR: This paper introduces a model selection method, based on a Bayesian approach, that balances model fitness against complexity, and presents an unsupervised clustering method that achieved the best F1 score.

Book
11 May 2017
TL;DR: Fuzzy Qualitative techniques are shown to effectively address these challenges by bridging the gap between symbolic cognitive functions and numerical sensing & control tasks in intelligent systems.
Abstract: This book introduces readers to the latest exciting advances in human motion sensing and recognition, from the theoretical development of fuzzy approaches to their applications. The topics covered include human motion recognition in 2D and 3D, hand motion analysis with contact sensors, and vision-based view-invariant motion recognition, especially from the perspective of Fuzzy Qualitative techniques. With the rapid development of technologies in microelectronics, computers, networks, and robotics over the last decade, increasing attention has been focused on human motion sensing and recognition in many emerging and active disciplines where human motions need to be automatically tracked, analyzed or understood, such as smart surveillance, intelligent human-computer interaction, robot motion learning, and interactive gaming. Current challenges mainly stem from the dynamic environment, data multi-modality, uncertain sensory information, and real-time issues. These techniques are shown to effectively address the above challenges by bridging the gap between symbolic cognitive functions and numerical sensing & control tasks in intelligent systems. The book not only serves as a valuable reference source for researchers and professionals in the fields of computer vision and robotics, but will also benefit practitioners and graduates/postgraduates seeking advanced information on fuzzy techniques and their applications in motion analysis.

Proceedings ArticleDOI
11 Sep 2017
TL;DR: This work investigated neural networks for motion performance evaluation utilizing a set of inertial sensor-based ski jump measurements and found that one multi-dimensional convolutional layer is sufficient to recognize relevant performance error representations.
Abstract: Advanced machine learning technologies are seldom applied to wearable motion sensor data obtained from sport movements. In this work, we therefore investigated neural networks for motion performance evaluation utilizing a set of inertial sensor-based ski jump measurements. A multi-dimensional convolutional network model that relates the motion data across time, sensor placement, and sensor type was implemented. Additionally, its applicability as a measure for automatic motion style judging was evaluated. Results indicate that one multi-dimensional convolutional layer is sufficient to recognize relevant performance error representations. Furthermore, comparisons against a Support Vector Machine and a Hidden Markov Model show that the new model outperforms feature-based methods under noisy and biased data environments. Architectures such as the proposed evaluation system can hence become essential for automatic performance analysis and style judging systems in the future.

Journal ArticleDOI
TL;DR: The potential of two ASCs (GoPro Hero3+) for in-air and underwater three-dimensional motion analysis as a function of different camera setups involving the acquisition frequency, image resolution and field of view is evaluated.

Patent
24 Oct 2017
TL;DR: In this paper, an optical flow estimation method based on multi-scale correspondence structured learning is proposed; given successive video frames, the method estimates the motion of the first frame relative to the second, and provides improved accuracy and robustness under various complex conditions.
Abstract: The invention discloses an optical flow estimation method based on multi-scale correspondence structured learning. Given successive video frames, the method estimates the motion of the first frame relative to the second. The method comprises the following steps: obtaining a successive-frame image dataset for training optical flow estimation, and defining the algorithm objective; structurally modeling the correspondences between two successive frames at different scales; jointly encoding the correspondence relations across scales; building an optical flow prediction model; and using the prediction model to estimate optical flow values for successive video frames. The method is applicable to optical flow motion analysis in real videos and provides improved accuracy and robustness under various complex conditions.

Proceedings ArticleDOI
14 Jun 2017
TL;DR: The next-generation of IoE/IoT and ubiquitous systems will benefit from presented algorithms integrated in smart homes, post-surgery/rehabilitation environments, elderly activity/safety monitoring, sport umpiring and other motion analysis contexts and environments where video privacy is considered paramount.
Abstract: This paper presents a solution for filtering personally identifiable information for developments of augmented video coaching systems and exergames, applied in a golf case study. Aligned with eWaste reduction, this study included multiple low-cost video sources, including new and obsolete video cameras, mobile devices, and Microsoft Kinect, to capture and convert motion data into a silhouette-based video. The algorithm development combined computer vision techniques with design science. The silhouette-based video produced was sufficient for a coach to provide analysis of static and dynamic critical features of golf swings (e.g. stance, tempo and swing plane). Looking beyond eSport systems, the achieved outcomes (robustness to colours, contrast, diverse video resolutions and lighting intensity changes) and insights are universally applicable to enable human- or AI-based motion activity analytics. The next generation of IoE/IoT and ubiquitous systems will benefit from the presented algorithms integrated in smart homes, post-surgery/rehabilitation environments, elderly activity/safety monitoring, sport umpiring and other motion analysis contexts and environments where video privacy is considered paramount.
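The paper's pipeline handles multiple heterogeneous video sources; the essential privacy-filtering step it rests on can be sketched as simple background differencing binarised into a silhouette mask, which discards texture, colour and facial detail while keeping pose. The threshold value here is an illustrative assumption.

```python
def silhouette(frame, background, thresh=30):
    """Per-pixel absolute background difference, binarised to a silhouette.

    frame, background: 2-D lists of grayscale intensities (0-255).
    Pixels differing from the background by more than `thresh`
    become foreground (255); everything else is suppressed (0)."""
    return [[255 if abs(p - b) > thresh else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
```

Because only the binary mask survives, downstream swing analysis (stance, tempo, swing plane) remains possible while the identifiable appearance is removed.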

Journal ArticleDOI
TL;DR: A general framework is proposed which is applied in automatic estimation of human activities and behaviors, by exploring dependencies among different levels of feature spaces of the body movement.
Abstract: Human body motion analysis is an initial procedure for understanding and perceiving human activities. A multilevel approach is proposed here for automatic human activity and social role identification. Different topics contribute to the development of the proposed approach, such as feature extraction, body motion description, and probabilistic modeling, all combined in a multilevel framework. The approach uses 3-D data extracted from a motion capture device. A Bayesian network technique is used to implement the framework. A mid-level body motion descriptor, using the Laban movement analysis system, is the core of the proposed framework. The mid-level descriptor links low-level features to higher levels of human activities, by providing a set of proper human motion-based features. This paper proposes a general framework which is applied in automatic estimation of human activities and behaviors, by exploring dependencies among different levels of feature spaces of the body movement.
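The paper's Bayesian network links low-level features to activities through Laban-style mid-level descriptors; a heavily simplified sketch of that probabilistic link, under a naive-Bayes independence assumption, is shown below. The descriptor and activity names are hypothetical, not taken from the paper.

```python
def posterior(priors, likelihoods, observed):
    """Posterior P(activity | observed mid-level descriptors) for a
    two-level model in which each activity emits descriptors
    independently (naive-Bayes simplification of the paper's BN).

    priors:      {activity: P(activity)}
    likelihoods: {activity: {descriptor: P(descriptor | activity)}}
    observed:    list of descriptors detected in the motion sequence."""
    scores = {}
    for act, prior in priors.items():
        p = prior
        for d in observed:
            p *= likelihoods[act].get(d, 1e-6)  # tiny floor for unseen descriptors
        scores[act] = p
    z = sum(scores.values())
    return {a: s / z for a, s in scores.items()}
```

In the full framework the conditional tables would be learned from motion-capture data and the descriptors derived from Laban movement analysis rather than hand-set.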

Journal ArticleDOI
TL;DR: A “target-wise” recall evaluation method is proposed that counts whole objects rather than pixels; the false detection rate is low, and the method is suitable for advanced applications and motion analysis in satellite video.
Abstract: To address motion detection in satellite video, a background subtraction method combining global motion compensation with local dynamic updating is proposed. First, an improved ViBE model is used to establish the background model on the middle frame; the background model carries an additional dynamic update factor. Second, the global inter-frame motion of the scene is estimated using uniformly blocked forward-backward LK optical flow, and global motion compensation is performed. Third, moving objects are detected and segmented by comparing the compensated frame against the model and applying connected-component analysis. Moreover, the model's update factor is corrected according to a “pseudo-motion” judgment, so the model is updated locally and adaptively. A “target-wise” recall evaluation method is proposed that counts whole objects rather than pixels. Four experiments on SkySat and JL1H video show that the proposed method achieves a favorable “target-wise” recall, better than 80%, with a low false detection rate, reduced by a factor of at least 10 and up to more than 160 compared with the classical method. The method is suitable for advanced applications and motion analysis in satellite video.
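The abstract does not spell out the scoring rule beyond counting whole objects; one plausible formalisation of such a "target-wise" recall is sketched below, where a ground-truth object counts as detected when a minimum fraction of its pixels appears in the detection mask. The 0.5 overlap threshold is an assumption, not a figure from the paper.

```python
def targetwise_recall(gt_objects, detected_pixels, min_overlap=0.5):
    """Recall counted per object rather than per pixel.

    gt_objects:      list of ground-truth objects, each a set of (row, col)
    detected_pixels: set of (row, col) pixels flagged by the detector
    An object is a hit if >= min_overlap of its pixels are detected."""
    if not gt_objects:
        return 1.0
    hits = sum(1 for obj in gt_objects
               if len(obj & detected_pixels) / len(obj) >= min_overlap)
    return hits / len(gt_objects)
```

For small, few-pixel targets typical of satellite video, this avoids penalising a method that finds every object but slightly misjudges their outlines.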

28 Aug 2017
TL;DR: This article presents a threefold approach for model calibration that can be easily deployed in any biomechanical lab equipped with classical motion analysis facilities and focuses on the idea of using such methods as a tool in any motion analysis lab.
Abstract: Subject-specific musculoskeletal models are mandatory to conduct efficient analyses of the muscle and joint forces involved in human motion. Thus, proper model calibration at the geometrical, inertial, and muscular levels is critical. This article presents a threefold approach for model calibration that can be easily deployed in any biomechanical lab equipped with classical motion analysis facilities. First, motion capture data is used to calibrate the geometrical parameters of the model (bone lengths, joint centers, and joint orientations); the calibration minimizes the distance between real and reconstructed marker trajectories. Second, motion capture and force platform data are used to calibrate the inertial parameters of the model; the calibration minimizes the residual forces arising from the model's inertial inaccuracies in the dynamics of the system. Last, isokinetic ergometer data are used to calibrate the muscular parameters; the calibration minimizes the distance between the experimental maximal isometric torque curve and the simulated one for a given joint. Examples are provided throughout the paper and results are discussed. A focus is made on the idea of using such methods as a tool in any motion analysis lab.
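The first calibration level, fitting geometry by minimising marker-reconstruction error, reduces in its simplest form to a least-squares length estimate. As a sketch under that simplification (the article calibrates full kinematic chains, not a single segment): the constant length L minimising the sum of squared deviations from frame-wise marker distances is just their mean.

```python
import math

def calibrate_segment_length(marker_a, marker_b):
    """Least-squares estimate of a rigid segment length from two marker
    trajectories: the constant L minimising sum_t (||a_t - b_t|| - L)^2
    is the mean inter-marker distance over all captured frames.

    marker_a, marker_b: lists of (x, y, z) positions, one per frame."""
    dists = [math.dist(a, b) for a, b in zip(marker_a, marker_b)]
    return sum(dists) / len(dists)
```

Averaging over many frames makes the estimate robust to per-frame soft-tissue and measurement noise, which is the rationale for calibrating from motion trials rather than a single static pose.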

Patent
10 May 2017
TL;DR: In this article, an intelligent physique testing platform is proposed for testing push-up and sit-up events: a human body data camera unit obtains an image of a tester standing on the human body data testing platform, a central processing unit analyses the collected height, weight, shoulder-breadth and chest-thickness values to derive placement location ranges for the hands and feet and a standard body-movement amplitude range for each exercise, and a motion analysis unit checks the captured movement against that range.
Abstract: The invention discloses an intelligent physique testing platform, mainly used for testing push-up and sit-up events. A human body data camera unit obtains an image of a tester standing on the human body data testing platform and transmits it to a human body data analysis system. A central processing unit processes the collected height, weight, shoulder-breadth and chest-thickness values to derive a placement location range for the left hand, right hand, left foot and right foot and a standard push-up body-movement amplitude range, as well as a placement location range for the left buttock, right buttock, left foot and right foot and a standard sit-up body-movement amplitude range. A motion analysis unit obtains image information from a motion capture unit, and if the body-movement amplitude value is within the amplitude range, an indicator light shows green. The intelligent physique testing platform has the advantages of being intelligent, accurate and efficient.
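The patent does not publish its judging algorithm; a hypothetical minimal sketch of the amplitude check is shown below: segment a movement signal into repetitions and flag each green only if its peak amplitude falls within the standard range. All thresholds are illustrative assumptions.

```python
def judge_reps(signal, rest, low, high):
    """Segment a movement-amplitude signal into reps (excursions above
    `rest`) and flag each rep True ("green") if its peak amplitude lies
    within the standard range [low, high]."""
    verdicts = []
    peak = None
    for v in signal:
        if v > rest:
            peak = v if peak is None else max(peak, v)  # inside a rep
        elif peak is not None:                          # rep just ended
            verdicts.append(low <= peak <= high)
            peak = None
    if peak is not None:                                # rep cut off at end
        verdicts.append(low <= peak <= high)
    return verdicts
```

A real platform would derive `signal` from the motion capture unit (e.g. torso elevation per frame) and drive the indicator light from each verdict.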

Journal ArticleDOI
27 Aug 2017-Science
TL;DR: This study investigates a novel three-dimensional gait recognition approach based on a skeleton representation of motion from the low-cost, consumer-level Kinect sensor, using the spatio-temporal variations in relative angles among skeletal joints and the changing measured distance between the limbs and the ground.
Abstract: This study investigates a novel three-dimensional gait recognition approach based on a skeleton representation of motion captured by the low-cost, consumer-level Kinect sensor. In this work, a new exemplification of the human gait signature is proposed using the spatio-temporal variations in the relative angles among various skeletal joints and the changing measured distance between the limbs and the ground. These measurements are computed over one gait cycle. Further, we have created our own Kinect-based dataset and extract two sets of dynamic features. Nearest Neighbors and a Linear Discriminant Classifier (LDC) are used for classification. The experimental results show the proposed approach to be an effective human gait recognizer in comparison with current Kinect-based gait recognition methods.
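One of the two feature families, the relative angle among skeletal joints, can be sketched directly from 3-D joint positions; the gait signature then stacks such angles frame by frame over a cycle. The joint naming in the comment is illustrative.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c,
    e.g. the knee angle from hip (a), knee (b) and ankle (c) 3-D
    positions reported by a Kinect skeleton."""
    u = [ai - bi for ai, bi in zip(a, b)]
    v = [ci - bi for ci, bi in zip(c, b)]
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    cosang = max(-1.0, min(1.0, dot / (nu * nv)))  # clamp rounding noise
    return math.degrees(math.acos(cosang))
```

Because the angle depends only on relative joint positions, the feature is invariant to where the subject stands in the Kinect's field of view.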

Journal ArticleDOI
TL;DR: The method opens new vistas for robust, user-interaction-free estimation of full-body kinematics, a prerequisite to motion analysis, achieving scores of 91.1 and 69.1 on the mean Probability of Correct Keypoint measure and the Average Precision of Keypoints measure for the frontal and sagittal datasets, respectively.
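The Probability of Correct Keypoint (PCK) measure cited above is simply the fraction of predicted joints falling within a threshold distance of the ground truth; a minimal sketch follows. In standard PCK the threshold is normalised by a body-size reference (e.g. torso or head length); for brevity an absolute threshold is assumed here.

```python
import math

def pck(predicted, ground_truth, threshold):
    """Probability of Correct Keypoints: fraction of predicted joints
    lying within `threshold` distance of their ground-truth positions.

    predicted, ground_truth: aligned lists of (x, y) joint coordinates."""
    correct = sum(1 for p, g in zip(predicted, ground_truth)
                  if math.dist(p, g) <= threshold)
    return correct / len(predicted)
```

A score of 91.1 then means roughly nine in ten joints were localised within the tolerance on the frontal dataset.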

Journal ArticleDOI
TL;DR: A measurement system for arm motion analysis using a 3D image sensor is developed and the method of upper body posture estimation based on a steady-state genetic algorithm (SSGA) is proposed.
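In such a system, posture estimation reduces to minimising a fitness function (e.g. the mismatch between the 3-D image data and an articulated body model) over joint parameters. The paper's exact SSGA is not detailed in this listing; below is a generic steady-state GA sketch, with all hyperparameters and the blend-crossover operator chosen illustratively.

```python
import random

def ssga_minimise(fitness, dim, bounds, generations=300, pop_size=12, seed=7):
    """Steady-state GA sketch: unlike a generational GA, each step breeds
    a single child (blend crossover + Gaussian mutation) and replaces the
    current worst individual only if the child improves on it."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        a, b = rng.sample(pop, 2)                    # pick two parents
        child = [min(hi, max(lo, (x + y) / 2 + rng.gauss(0.0, 0.1)))
                 for x, y in zip(a, b)]              # blend + mutate, clamp to bounds
        worst = max(range(pop_size), key=lambda i: fitness(pop[i]))
        if fitness(child) < fitness(pop[worst]):     # steady-state replacement
            pop[worst] = child
    return min(pop, key=fitness)
```

Because only the worst individual is ever replaced, the best posture hypothesis found so far is never lost between frames, which suits frame-to-frame tracking.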

Journal ArticleDOI
TL;DR: In this article, a robotic leg has been designed, constructed and controlled based on a geometric model of the human leg with three joints moving in a 2D plane; the robotic leg was able to reproduce movement from motion capture.
Abstract: This paper presents a prototype robotic leg that has been designed, constructed and controlled. The prototype is designed from a geometric model of the human leg with three joints moving in a 2D plane. The robot has three degrees of freedom, using DC servo motors as joint actuators at the hip, knee and ankle. The mechanical leg is constructed from aluminum alloy and acrylic. Movement control is based on motion capture data stored on a personal computer; the motions are recorded with a camera using marker-based tracking of the human leg. The purpose of this paper is to design a robotic leg for analysing the swing motion of the human leg and to test the system's ability to reproduce movement from motion capture. The results of this study show that the robotic leg design is capable of practical use for human leg motion analysis. The average joint-angle error is 1.46° at the hip, 1.66° at the knee, and 0.46° at the ankle. This research suggests that the mechanical construction plays an important role in stabilising the movement sequence.
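Validating such a prototype amounts to comparing commanded joint angles against the positions they produce; a planar forward-kinematics sketch for a hip-knee-ankle chain follows. The segment lengths and the angle convention (cumulative rotations from vertical, hip at the origin, y pointing down) are assumptions for illustration, not the prototype's actual geometry.

```python
import math

def leg_forward_kinematics(thigh, shank, foot, hip_deg, knee_deg, ankle_deg):
    """Planar forward kinematics of a 3-DOF hip-knee-ankle chain.

    Joint angles are cumulative rotations from the vertical; returns the
    hip, knee, ankle and toe positions with the hip at the origin and
    positive y pointing down toward the ground."""
    pts = [(0.0, 0.0)]
    angle = 0.0
    for length, joint_deg in ((thigh, hip_deg), (shank, knee_deg), (foot, ankle_deg)):
        angle += math.radians(joint_deg)         # angles accumulate down the chain
        x, y = pts[-1]
        pts.append((x + length * math.sin(angle), y + length * math.cos(angle)))
    return pts
```

Feeding motion-capture joint angles through such a model and comparing the resulting toe trajectory against the recorded one is one way to quantify the per-joint errors the paper reports.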