
Showing papers on "Motion analysis published in 2013"


Journal ArticleDOI
19 Jul 2013-Sensors
TL;DR: This paper proposes a method for three dimensional gait analysis using wearable sensors and quaternion calculations, and results were compared with those from a camera-based motion analysis system.
Abstract: This paper proposes a method for three-dimensional gait analysis using wearable sensors and quaternion calculations. Seven sensor units, each consisting of tri-axial acceleration and gyro sensors, were fixed to the lower limbs. The acceleration and angular velocity data of each sensor unit were measured during level walking. The initial orientations of the sensor units were estimated using acceleration data recorded during an upright standing position, and the angular displacements were subsequently estimated using angular velocity data recorded during gait. An algorithm based on quaternion calculation was implemented for orientation estimation of the sensor units. The orientations of the sensor units were converted to the orientations of the body segments by a rotation matrix obtained from a calibration trial. Body segment orientations were then used to construct a three-dimensional wire-frame animation of the volunteers during gait. Gait analysis was conducted on five volunteers, and the results were compared with those from a camera-based motion analysis system. Comparisons were made for the joint trajectories in the horizontal and sagittal planes. The average RMSE and correlation coefficient (CC) were 10.14 deg and 0.98, 7.88 deg and 0.97, and 9.75 deg and 0.78 for the hip, knee and ankle flexion angles, respectively.
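
A minimal sketch (not the authors' implementation) of the core quaternion step, integrating body-frame angular velocity into an orientation estimate, assuming scalar-first unit quaternions and a fixed sample period:

```python
import numpy as np

def quat_multiply(q, r):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q0, omega, dt):
    """Propagate an initial orientation q0 through gyro samples omega (N x 3, rad/s)."""
    q = np.array(q0, dtype=float)
    trajectory = [q.copy()]
    for w in omega:
        # Incremental rotation over one sample period (small axis-angle -> quaternion).
        angle = np.linalg.norm(w) * dt
        if angle > 1e-12:
            axis = w / np.linalg.norm(w)
            dq = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
        else:
            dq = np.array([1.0, 0.0, 0.0, 0.0])
        q = quat_multiply(q, dq)
        q /= np.linalg.norm(q)          # keep the quaternion normalized
        trajectory.append(q.copy())
    return np.array(trajectory)

# Example: identity start, constant 10 deg/s about the x axis for 1 s at 100 Hz.
omega = np.tile(np.radians([10.0, 0.0, 0.0]), (100, 1))
quats = integrate_gyro([1, 0, 0, 0], omega, dt=0.01)
```

In the paper, the initial quaternion would come from the accelerometer-based standing calibration rather than the identity used in this example.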

141 citations


Book ChapterDOI
05 Mar 2013

73 citations


Journal ArticleDOI
TL;DR: In this paper, the performance of an RGB-D sensor is evaluated in terms of the 3D positions of the body parts tracked by the sensor, the 3D rotation angles at joints, and the impact of the RGB-D sensor's accuracy on motion analysis.
Abstract: For construction management, data collection is a critical process for gathering and measuring information for the evaluation and control of ongoing project performances. Taking into account that construction involves a significant amount of manual work, worker monitoring can play a key role in analyzing operations and improving productivity and safety. However, time-consuming tasks involved in field observation have brought up the issue of implementing worker observation in daily management practice. In an effort to address the issue, this paper investigates the performances of a cost-effective and portable RGB-D sensor, based on recent research efforts extended from our previous study. The performance of an RGB-D sensor is evaluated in terms of (1) the 3D positions of the body parts tracked by the sensor, (2) the 3D rotation angles at joints, and (3) the impact of the RGB-D sensor’s accuracy on motion analysis. For the assessment, experimental studies were undertaken to collect motion capture datasets using an RGB-D sensor and a marker-based motion capture system, VICON, and to analyze errors as compared with the VICON used as the ground truth. As a test case, 25 trials of ascending and descending during ladder climbing were recorded simultaneously with both systems, and the resulting motion capture datasets (i.e., 3D skeleton models) were temporally and spatially synchronized for their comparison. Through the comparative assessment, we found a discrepancy of 10.7 cm in the tracked locations of body parts, and a difference of 16.2 degrees in rotation angles. However, motion detection results show that the inaccuracy of an RGB-D sensor does not have a considerable effect on action recognition in the experiment. This paper thus provides insight into the accuracy of an RGB-D sensor on motion capture in various measures and directions of further research for the improvement of accuracy.

73 citations


Journal ArticleDOI
TL;DR: A three-phase gait recognition method is presented that analyses the spatio-temporal shape and dynamic motion characteristics of a human subject's silhouettes to identify the subject in the presence of most of the challenging factors that affect existing gait recognition systems.

58 citations


Proceedings ArticleDOI
02 Sep 2013
TL;DR: A human body part motion analysis based approach is proposed for depression analysis using facial dynamics and vocal prosody to explore the significance of body pose and motion in analysing the psychological state of a person.
Abstract: In this paper, a human body part motion analysis based approach is proposed for depression analysis. Depression is a serious psychological disorder. The absence of an (automated) objective diagnostic aid for depression leads to a range of subjective biases in initial diagnosis and ongoing monitoring. Researchers in the affective computing community have approached the depression detection problem using facial dynamics and vocal prosody. Recent works in affective computing have shown the significance of body pose and motion in analysing the psychological state of a person. Inspired by these works, we explore a body part motion based approach. Relative orientation and radius are computed for the body parts detected using the pictorial structures framework, and a histogram of relative part motion is computed. To analyse the motion on a holistic level, space-time interest points are computed and a bag-of-words framework is learnt. The two histograms are fused and a support vector machine classifier is trained. The experiments conducted on a clinical database prove the effectiveness of the proposed method.

52 citations


Journal ArticleDOI
TL;DR: Experimental evaluation with different users and environmental variations under real-world driving shows the promise of applying the proposed systems for both postanalysis of captured driving data as well as for real-time driver assistance.
Abstract: We focus on vision-based hand activity analysis in the vehicular domain. The study is motivated by the overarching goal of understanding driver behavior, in particular as it relates to attentiveness and risk. First, the unique advantages and challenges for a nonintrusive, vision-based solution are reviewed. Next, two approaches for hand activity analysis, one relying on static (appearance only) cues and another on dynamic (motion) cues, are compared. The motion-cue-based hand detection uses temporally accumulated edges in order to maintain the most reliable and relevant motion information. The accumulated image is fitted with ellipses in order to produce the location of the hands. The method is used to identify three hand activity classes: (1) two hands on the wheel, (2) hand on the instrument panel, (3) hand on the gear shift. The static-cue-based method extracts features in each frame in order to learn a hand presence model for each of the three regions. A second-stage classifier (linear support vector machine) produces the final activity classification. Experimental evaluation with different users and environmental variations under real-world driving shows the promise of applying the proposed systems for both postanalysis of captured driving data as well as for real-time driver assistance.
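
A rough OpenCV sketch of the motion-cue idea described above (temporal edge accumulation followed by ellipse fitting); the input file, thresholds and decay weight are illustrative, not the authors' values:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("driving.mp4")           # hypothetical input video
acc = None                                      # floating-point edge accumulator

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160).astype(np.float32)
    if acc is None:
        acc = np.zeros_like(edges)
    # Exponentially decaying accumulation keeps only recent, reliable motion edges.
    cv2.accumulateWeighted(edges, acc, 0.2)

    # Threshold the accumulator and fit ellipses to the largest blobs (candidate hands).
    mask = (acc > 40).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True)[:2]:
        if len(c) >= 5:                         # fitEllipse needs at least 5 points
            ellipse = cv2.fitEllipse(c)
            cv2.ellipse(frame, ellipse, (0, 255, 0), 2)

cap.release()
```

The fitted ellipse centres would then be assigned to the wheel, instrument-panel or gear-shift regions to produce the three activity classes.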

49 citations


Journal ArticleDOI
TL;DR: This paper introduces a method to track the spatial location and movement of a human using wearable inertia sensors without additional external global positioning devices, which can be worn on the human at any time and any place and has no restriction to indoor and outdoor applications.
Abstract: This paper introduces a method to track the spatial location and movement of a human using wearable inertia sensors without additional external global positioning devices. Starting from the lower limb kinematics of a human, the method uses multiple wearable inertia sensors to determine the orientation of the body segments and lower limb joint motions. At the same time, based on human kinematics and locomotion phase detection, the spatial position and the trajectory of a reference point on the body can be determined. An experimental study has shown that the position error can be controlled within 1-2% of the total distance in both indoor and outdoor environments. The system is capable of localization on irregular terrains (like uphill/downhill), and the ground shape and height information can be recovered from the localization results after the experiments are conducted. A benchmark study on the accuracy of this method was carried out using a camera-based motion analysis system to study the validity of the system. The localization data obtained from the proposed method match well with those from the commercial system. Since the sensors can be worn on the human at any time and any place, this method has no restriction to indoor and outdoor applications.

49 citations


01 Jan 2013
TL;DR: A new algorithm is presented for detecting moving objects from a static background scene, based on background subtraction and morphological filtering.
Abstract: Recent research in computer vision has increasingly focused on building systems for observing humans and understanding their appearance, activities, and behavior, providing advanced interfaces for interacting with humans, and creating sensible models of humans for various purposes. In order for any of these systems to function, they require methods for detecting people from a given input image or video. Visual analysis of human motion is currently one of the most active research topics in computer vision, in which moving human body detection is the most important part of human body motion analysis; the purpose is to detect the moving human body from the background image in video sequences, and its effective detection plays a very important role in follow-up processing such as target classification, human body tracking and behavior understanding. Human motion analysis concerns the detection, tracking and recognition of people's behaviors from image sequences involving humans. Building on the results of moving object detection research on video sequences, this paper presents a new algorithm for detecting moving objects from a static background scene based on background subtraction. We set up a reliable background updating model based on statistical methods. After that, morphological filtering is applied to remove noise and resolve background interruption. Finally, contour projection analysis is combined with shape analysis to remove the effect of shadows, so that moving human bodies are accurately and reliably detected. The experimental results show that the proposed method runs rapidly and accurately and is suitable for real-time detection.
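
A minimal sketch of a background-subtraction-plus-morphology pipeline of the kind described above, using OpenCV's MOG2 model as a stand-in for the paper's own statistical background update (file name, thresholds and the area cut-off are illustrative):

```python
import cv2

cap = cv2.VideoCapture("corridor.avi")                  # hypothetical input sequence
bg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = bg.apply(frame)                                # statistical background model update
    fg[fg == 127] = 0                                   # MOG2 marks shadows as 127; drop them
    # Morphological opening/closing removes speckle noise and fills small holes.
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)
    fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    bodies = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]

cap.release()
```

The paper's contour-projection and shape-analysis steps for shadow removal are not reproduced here.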

48 citations


Proceedings ArticleDOI
23 Jun 2013
TL;DR: A new video database that can be used for an objective comparison and evaluation of different motion analysis and classification methods is presented, which contains video clips that capture the 3D motion of individuals.
Abstract: The detection and classification of human movements, as a joint field of Computer Vision and Pattern Recognition, is used at an increasing rate in applications designed to describe human activity. Such applications require efficient methods and tools for the automatic analysis and classification of motion capture data, which constitute an active field of research. To facilitate the development and benchmarking of methods for action recognition, several video collections have previously been proposed. In this paper, we present a new video database that can be used for an objective comparison and evaluation of different motion analysis and classification methods. The database contains video clips that capture the 3D motion of individuals. More specifically, the set consists of 8374 video clips, which contain 12 different types of tennis actions performed by 55 individuals and captured by Kinect. Kinect provides the depth map of the motion data and helps to extract the 3D skeletal joint connections. Experiments using state-of-the-art algorithms show the database to be very challenging. It contains actions that are very similar to each other, offering the opportunity for algorithms dedicated to gaming and athletics to be developed and tested. The database is freely available for research purposes.

41 citations


Proceedings ArticleDOI
30 Sep 2013
TL;DR: The flexion-extension knee angle is calculated from segment accelerations and angular rates measured using body-worn inertial sensors, aligned by a functional calibration procedure, to allow the analysis of joint angles during dynamic sports movements.
Abstract: Motion analysis has become an important tool for athletes to improve their performance. However, most motion analysis systems are expensive and can only be used in a laboratory environment. Ambulatory motion analysis systems using inertial sensors would allow more flexible use, e.g. in a real training environment or even during competitions. This paper presents the calculation of the flexion-extension knee angle from segment accelerations and angular rates measured using body-worn inertial sensors. Using a functional calibration procedure, the sensors are first aligned without the need for an external camera system. An extended Kalman filter is used to estimate the relative orientations of the thigh and shank, from which the knee angle is calculated. The algorithm was validated by comparing the computed knee angle to the output of a reference camera motion tracking system. In total, seven subjects performed five dynamic motions: walking, jogging, running, jumps and squats. The averaged root mean squared error of the estimated knee angle was 8.2° ± 2.4° over all motions, with an average Pearson correlation of 0.971 ± 0.020. In the future this will allow the analysis of joint angles during dynamic sports movements.
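
The paper's extended Kalman filter is not reproduced here; the sketch below only illustrates the downstream step of turning two segment orientations into a flexion-extension angle and comparing it against a reference, assuming sensor-to-segment alignment has already been performed and that the local x axis is the flexion axis:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R
from scipy.stats import pearsonr

def knee_flexion(q_thigh, q_shank):
    """Flexion-extension angle (deg) from thigh and shank orientation quaternions.

    q_thigh, q_shank : (n_frames, 4) quaternions in (x, y, z, w) order.
    Assumes the flexion axis is the shared medio-lateral (local x) axis.
    """
    rel = R.from_quat(q_thigh).inv() * R.from_quat(q_shank)   # shank relative to thigh
    # Decompose so the first rotation is about the assumed flexion axis.
    return rel.as_euler("xyz", degrees=True)[:, 0]

def compare_to_reference(est, ref):
    """RMSE and Pearson correlation against a camera-based reference angle curve."""
    rmse = np.sqrt(np.mean((est - ref) ** 2))
    r, _ = pearsonr(est, ref)
    return rmse, r
```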

41 citations


Journal ArticleDOI
30 Oct 2013-PLOS ONE
TL;DR: A conceptually new gap filling algorithm is proposed and results from a proof-of-principle analysis supported the assumption that missing marker information can be reconstructed from the intercorrelations between marker coordinates, provided that sufficient data with complete marker information is available.
Abstract: Marker-based human motion analysis is an important tool in clinical research and in many practical applications. Missing marker information caused by occlusions or a marker falling off is a common problem impairing data quality. The current paper proposes a conceptually new gap filling algorithm and presents results from a proof-of-principle analysis. The underlying idea of the proposed algorithm was that a multitude of internal and external constraints govern human motion and lead to a highly subject-specific movement pattern in which all motion variables are intercorrelated in a specific way. Two principal component analyses were used to determine how the coordinates of a marker with gaps correlated with the coordinates of the other, gap-free markers. Missing marker data could then be reconstructed through a series of coordinate transformations. The proposed algorithm was tested by reconstructing artificially created gaps in a 20-step walking trial and in an 18-s one-leg balance trial. The measurement accuracy's dependence on the marker position, the length of the gap, and other parameters was evaluated. Even if only 2 steps of walking or 1.8 s of postural sway (10% of the whole marker data) were provided as input in the current study, the reconstructed marker trajectory differed on average by no more than 11 mm from the originally measured trajectory. The reconstruction improved further, on average, to distances below 5 mm if the marker trajectory was available for more than 50% of the trial. The results of this proof-of-principle analysis supported the assumption that missing marker information can be reconstructed from the intercorrelations between marker coordinates, provided that sufficient data with complete marker information is available. Estimating missing information cannot be avoided entirely in many situations in human motion analysis. For some of these situations, the proposed reconstruction method may provide a better solution than what is currently available.
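
The published method uses two principal component analyses and a series of coordinate transformations; as a simplified stand-in, the sketch below regresses a gappy marker's coordinates on the leading principal components of the gap-free markers (function and variable names are ours):

```python
import numpy as np

def fill_gap(X_complete, Y_complete, X_gap, n_components=10):
    """Reconstruct missing marker coordinates from intercorrelated, gap-free markers.

    X_complete : (n_frames, 3*m) gap-free marker coordinates in frames where Y is known
    Y_complete : (n_frames, 3)   coordinates of the gappy marker in those frames
    X_gap      : (k_frames, 3*m) gap-free marker coordinates in the frames to fill
    """
    X_mean, Y_mean = X_complete.mean(axis=0), Y_complete.mean(axis=0)
    Xc = X_complete - X_mean
    # Principal directions of the gap-free markers via SVD.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components].T                              # (3*m, n_components)
    scores = Xc @ P                                      # PC scores of the training frames
    # Linear map from PC scores to the gappy marker's coordinates (least squares).
    W, *_ = np.linalg.lstsq(scores, Y_complete - Y_mean, rcond=None)
    return (X_gap - X_mean) @ P @ W + Y_mean

# Usage: frames where the marker is visible train the model; frames inside the gap are filled.
```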

Proceedings ArticleDOI
03 Oct 2013
TL;DR: This work introduces a novel method of knee joint measurement based on the contactless magnetic absolute encoder AS5040 (Austria Microsystems®, USA) and the PIC16F877A microprocessor; the developed device has a resolution of 0.35 degrees, quite enough for an orthosis that requires knee position measurements.
Abstract: Angular measurement of the knee joint is a key factor in orthosis design, and it is very important for human gait analysis in order to obtain deep knowledge of musculoskeletal system performance. Video-based motion analysis systems have been used as measurement tools to quantify the angle and movement of the knee joint. Although this technique works well, it requires special equipment and a high computational cost. This work introduces a novel method of knee joint measurement. The design is mainly based on the contactless magnetic absolute encoder AS5040 (Austria Microsystems®, USA) and the PIC16F877A microprocessor. The developed device has a resolution of 0.35 degrees, quite enough for an orthosis that requires knee position measurements. The reliability of the system was evaluated by performing: (1) static measurements using a mechanical goniometer as reference, in which a correlation of up to 0.999 was obtained; and (2) dynamic measurements of flexion-extension movements performed by a healthy subject, comparing the records with a commercial motion analysis system (APAS®, USA), for which a correlation of 0.9943 was obtained. In addition, trials were performed by three subjects during natural gait.
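
As a consistency check (ours, not the paper's), the stated 0.35-degree resolution matches a 10-bit absolute encoder, the AS5040 reporting 1024 positions per revolution:

```latex
\text{resolution} = \frac{360^\circ}{2^{10}} = \frac{360^\circ}{1024} \approx 0.35^\circ
```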

Proceedings ArticleDOI
01 Nov 2013
TL;DR: This paper implemented the method for the analysis of collected grasping motion in the PCA+fPCA space, which generated a new data-driven taxonomy of the grasp types, and naturally clustered grasping motion into 5 consistent groups across 5 different subjects.
Abstract: This paper presents a novel grasping motion analysis technique based on functional principal component analysis (fPCA). The functional analysis of grasping motion provides an effective representation of grasping motion and emphasizes motion dynamic features that are omitted by classic PCA-based approaches. The proposed approach represents, processes, and compares grasping motion trajectories in a low-dimensional space. An experiment was conducted to record grasping motion trajectories of 15 different grasp types in Cutkosky grasp taxonomy. We implemented our method for the analysis of collected grasping motion in the PCA+fPCA space, which generated a new data-driven taxonomy of the grasp types, and naturally clustered grasping motion into 5 consistent groups across 5 different subjects. The robustness of the grouping was evaluated and confirmed using a tenfold cross validation approach.

Journal ArticleDOI
TL;DR: A novel representation of octopus arm movements is described in which a movement is characterized by a pair of surfaces that represent the curvature and torsion values of points along the arm as a function of time.
Abstract: The octopus arm is a muscular hydrostat and due to its deformable and highly flexible structure it is capable of a rich repertoire of motor behaviors. Its motor control system uses planning principles and control strategies unique to muscular hydrostats. We previously reconstructed a data set of octopus arm movements from records of natural movements using a sequence of 3D curves describing the virtual backbone of arm configurations. Here we describe a novel representation of octopus arm movements in which a movement is characterized by a pair of surfaces that represent the curvature and torsion values of points along the arm as a function of time. This representation allowed us to explore whether the movements are built up of elementary kinematic units by decomposing each surface into a weighted combination of 2D Gaussian functions. The resulting Gaussian functions can be considered as motion primitives at the kinematic level of octopus arm movements. These can be used to examine underlying principles of movement generation. Here we used combination of such kinematic primitives to decompose different octopus arm movements and characterize several movement prototypes according to their composition. The representation and methodology can be applied to the movement of any organ which can be modeled by means of a continuous 3D curve.
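
Building such curvature and torsion surfaces requires evaluating the standard Frenet formulas along the reconstructed arm curve at each time step; a minimal NumPy sketch (finite-difference derivatives, not the authors' code) is:

```python
import numpy as np

def curvature_torsion(curve, ds=1.0):
    """Curvature and torsion along a sampled 3D curve (n_points x 3).

    Uses kappa = |r' x r''| / |r'|^3 and tau = ((r' x r'') . r''') / |r' x r''|^2
    with finite-difference derivatives along the curve parameter.
    """
    r1 = np.gradient(curve, ds, axis=0)          # first derivative
    r2 = np.gradient(r1, ds, axis=0)             # second derivative
    r3 = np.gradient(r2, ds, axis=0)             # third derivative
    cross = np.cross(r1, r2)
    cross_norm = np.linalg.norm(cross, axis=1)
    speed = np.linalg.norm(r1, axis=1)
    kappa = cross_norm / np.maximum(speed ** 3, 1e-12)
    tau = np.einsum("ij,ij->i", cross, r3) / np.maximum(cross_norm ** 2, 1e-12)
    return kappa, tau

# Stacking kappa and tau over time for every point along the arm yields the two
# surfaces described above, which can then be decomposed into 2D Gaussian primitives.
```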

Journal Article
TL;DR: The newly developed feature-tracking algorithm has shown a good automatic tracking effectiveness in underwater motion analysis with significantly smaller percentage of required manual interventions when compared to a commercial software.
Abstract: Tracking of markers placed on anatomical landmarks is a common practice in sports science to perform the kinematic analysis that interests both athletes and coaches. Although different software programs have been developed to automatically track markers and/or features, none of them was specifically designed to analyze underwater motion. Hence, this study aimed to evaluate the effectiveness of a software program developed for automatic tracking of underwater movements (DVP), based on the Kanade-Lucas-Tomasi feature tracker. Twenty-one video recordings of different aquatic exercises (n = 2940 markers' positions) were manually tracked to determine the markers' center coordinates. Then, the videos were automatically tracked using DVP and a commercially available software package (COM). Since tracking techniques may produce false targets, an operator was instructed to stop the automatic procedure and to correct the position of the cursor when the distance between the calculated marker coordinate and the reference one was higher than 4 pixels. The proportion of manual interventions required by the software was used as a measure of the degree of automation. Overall, manual interventions were 10.4% lower for DVP (7.4%) than for COM (17.8%). Moreover, when examining the different exercise modes separately, the percentage of manual interventions was 5.6% to 29.3% lower for DVP than for COM. Similar results were observed when analyzing the type of marker rather than the type of exercise, with 9.9% fewer manual interventions for DVP than for COM. In conclusion, based on these results, the developed automatic tracking software can be used as a valid and useful tool for underwater motion analysis. Key points: The availability of effective software for automatic tracking would represent a significant advance for the practical use of kinematic analysis in swimming and other aquatic sports. An important feature of automatic tracking software is to require limited human intervention and supervision, thus allowing short processing times. When tracking underwater movements, the degree of automation of the tracking procedure is influenced by the capability of the algorithm to overcome difficulties linked to the small target size, the low image quality and the presence of background clutter. The newly developed feature-tracking algorithm has shown good automatic tracking effectiveness in underwater motion analysis, with a significantly smaller percentage of required manual interventions compared to commercial software.
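
A minimal sketch of Kanade-Lucas-Tomasi marker tracking with OpenCV, of the kind DVP builds on (the input file and initial marker positions are hypothetical; DVP's underwater-specific handling and manual-correction workflow are not reproduced):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("underwater_trial.mp4")          # hypothetical recording
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Initial marker centres, e.g. clicked by an operator on the first frame (illustrative values).
points = np.array([[320, 240], [410, 255]], dtype=np.float32).reshape(-1, 1, 2)
lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

trajectories = [points.reshape(-1, 2).copy()]
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_points, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None, **lk_params)
    # Lost or unreliable points (status == 0) would be flagged here for manual correction.
    points, prev_gray = new_points, gray
    trajectories.append(points.reshape(-1, 2).copy())

cap.release()
```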

Journal ArticleDOI
TL;DR: A model-based approach (MBA) for human motion data reconstruction is presented, using a scalable registration method for combining joint physiological kinematics with limb segment poses; results show that the model-based MBS and MLS methods lead to physiologically acceptable human kinematics.

Proceedings ArticleDOI
15 Jul 2013
TL;DR: This paper proposes a novel abnormal event detection method via likelihood estimation of dynamic-texture motion representation, called Structural Multi-scale Motion Interrelated Patterns (SMMIP), which combines both original motion patterns and their structural spatio-temporal information, which effectively represents localized events by different resolutions of motion patterns.
Abstract: Detecting abnormal events in crowded scenes remains challenging due to the diversity of events defined by various applications. Among the many application situations, motion analysis for event representation is well suited to crowded scenes. In this paper, we propose a novel abnormal event detection method via likelihood estimation of a dynamic-texture motion representation, called Structural Multi-scale Motion Interrelated Patterns (SMMIP). SMMIP combines both original motion patterns and their structural spatio-temporal information, which effectively represents localized events by different resolutions of motion patterns. To model normal events, a Gaussian mixture model is trained with the observed normal events; the likelihood of testing events is then estimated to judge whether they are abnormal. Meanwhile, the proposed model can be learned online by updating the parameters incrementally. The proposed approach is evaluated on several publicly available datasets and outperforms several previously proposed methods, showing that the structural spatio-temporal information added to the motion representation helps increase the anomaly detection rate.
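
A minimal sketch of the normal-event modelling and likelihood test described above, using scikit-learn's Gaussian mixture model on illustrative random descriptors (the SMMIP features and the online incremental update are not reproduced):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# X_normal: descriptors of normal events (n_samples x n_features), e.g. the multi-scale
# motion patterns described above; here just illustrative random data.
rng = np.random.default_rng(0)
X_normal = rng.normal(size=(2000, 16))

gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(X_normal)

# Flag test descriptors whose log-likelihood under the normal-event model is too low.
threshold = np.percentile(gmm.score_samples(X_normal), 1)   # illustrative 1st-percentile cut
X_test = rng.normal(size=(100, 16))
is_abnormal = gmm.score_samples(X_test) < threshold
```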

Patent
Daniel Kennett1, Jonathan Hoof1
14 Mar 2013
TL;DR: In this article, the authors present systems and methods for a runtime engine for analyzing user motion in a 3D image, which is able to use different techniques to analyze the user's motion, depending on what the motion is.
Abstract: Disclosed herein are systems and methods for a runtime engine for analyzing user motion in a 3D image. The runtime engine is able to use different techniques to analyze the user's motion, depending on what the motion is. The runtime engine might choose a technique that depends on skeletal tracking data and/or one that instead uses image segmentation data to determine whether the user is performing the correct motion. The runtime engine might determine how to perform positional analysis or time/motion analysis of the user's performance based on what motion is being performed.

Journal ArticleDOI
TL;DR: An approach that automatically detects moving ships in port surveillance videos with robustness for occlusions is presented and the obtained results outperform two recent algorithms and show the accuracy and robustness of the proposed ship detection approach.
Abstract: In port surveillance, video-based monitoring is a valuable supplement to a radar system by helping to detect smaller ships in the shadow of a larger ship and with the possibility to detect nonmetal ships. Therefore, automatic video-based ship detection is an important research area for security control in port regions. An approach that automatically detects moving ships in port surveillance videos with robustness for occlusions is presented. In our approach, important elements from the visual, spatial, and temporal features of the scene are used to create a model of the contextual information and perform a motion saliency analysis. We model the context of the scene by first segmenting the video frame and contextually labeling the segments, such as water, vegetation, etc. Then, based on the assumption that each object has its own motion, labeled segments are merged into individual semantic regions even when occlusions occur. The context is finally modeled to help locating the candidate ships by exploring semantic relations between ships and context, spatial adjacency and size constraints of different regions. Additionally, we assume that the ship moves with a significant speed compared to its surroundings. As a result, ships are detected by checking motion saliency for candidate ships according to the predefined criteria. We compare this approach with the conventional technique for object classification based on support vector machine. Experiments are carried out with real-life surveillance videos, where the obtained results outperform two recent algorithms and show the accuracy and robustness of the proposed ship detection approach. The inherent simplicity of our algorithmic subsystems enables real-time operation of our proposal in embedded video surveillance, such as port surveillance systems based on moving, nonstatic cameras.

Journal ArticleDOI
TL;DR: This paper addresses the problem of multiplicative speckle noise for motion estimation techniques that are based on optical flow methods and proves that the influence of this noise leads to wrong correspondences between image regions if not taken into account.
Abstract: Motion estimation on ultrasound data is often referred to as `Speckle Tracking' in clinical environments and plays an important role in diagnosis and monitoring of cardiovascular diseases and the identification of abnormal cardiac motion. The impact of physical effects in the process of data acquisition raises many problems for conventional image processing techniques. The most significant difference to other medical data is its high level of speckle noise, which has completely different characteristics from other noise models, e.g., additive Gaussian noise. In this paper we address the problem of multiplicative speckle noise for motion estimation techniques that are based on optical flow methods and prove that the influence of this noise leads to wrong correspondences between image regions if not taken into account. To overcome these problems we propose the use of local statistics and introduce an optical flow method which uses histograms as discrete representations of local statistics for motion analysis. We show that this approach is more robust under the presence of speckle noise than classical optical flow methods.

Patent
12 Dec 2013
TL;DR: In this article, a system was proposed to analyze obtained video information of patient motion during a period of time to track one or more anatomical regions through a plurality of frames of the video information.
Abstract: Devices, systems, and techniques for analyzing video information to objectively identify patient behavior are disclosed. A system may analyze obtained video information of patient motion during a period of time to track one or more anatomical regions through a plurality of frames of the video information and calculate one or more movement parameters of the one or more anatomical regions. The system may also compare the one or more movement parameters to respective criteria for each of a plurality of predetermined patient behaviors and identify the patient behaviors that occurred during the period of time. In some examples, a device may control therapy delivery according to the identified patient behaviors and/or sensed parameters previously calibrated based on the identified patient behaviors.

Proceedings ArticleDOI
01 Sep 2013
TL;DR: A novel method for motion segmentation in crowded scenes, based on statistical modeling for structured prediction using a Conditional Random Field (CRF), which overcomes the label bias problem, making it suitable for crowd motion analysis.
Abstract: In this paper we present a novel method for motion segmentation in crowded scenes, based on statistical modeling for structured prediction using a Conditional Random Field (CRF). As opposed to other conditional Markov models, CRF overcomes the label bias problem, making it suitable for crowd motion analysis. In our method, a grid of particles is initialized on the scene, and advected using optical flow. The particles are exploited to extract motion patterns, used as input priors for CRF training. Furthermore, we exploit min cut/max flow algorithm to remove the residual noise and highlight the main directions of crowd motion. The experimental evaluation is conducted on a set of benchmark video sequences, commonly used for crowd motion analysis, and the obtained results are compared against other state of the art techniques.
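
A minimal sketch of the particle-advection step described above, using Farneback dense optical flow in OpenCV (the CRF training and min-cut/max-flow refinement are not shown; grid spacing and flow parameters are illustrative):

```python
import cv2
import numpy as np

def advect_particles(frames, step=16):
    """Advect a regular grid of particles through a frame sequence using dense optical flow."""
    h, w = frames[0].shape[:2]
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    particles = np.stack([xs, ys], axis=-1).reshape(-1, 2).astype(np.float32)
    tracks = [particles.copy()]
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Sample the flow field at each particle's current (rounded) position and move it.
        xi = np.clip(particles[:, 0].round().astype(int), 0, w - 1)
        yi = np.clip(particles[:, 1].round().astype(int), 0, h - 1)
        particles = particles + flow[yi, xi]
        tracks.append(particles.copy())
        prev = gray
    return np.array(tracks)    # (n_frames, n_particles, 2): motion patterns for CRF training
```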

Journal Article
TL;DR: The purpose of the study is to present an ideal kinematic description of the human gait cycle during walking, providing measurement values that rehabilitation hospitals, physical therapy centers, and sports medicine clinics can rely on as reference data for kinematic joint parameters.
Abstract: A kinematic system is used in gait analysis to record the position and orientation of the body segments, the angles of the joints, and the corresponding linear and angular velocities and accelerations. Gait analysis serves two very different purposes: to aid directly in the treatment of individual patients and to improve our understanding of gait through research. The purpose of this study is to present an ideal kinematic description of the human gait cycle during walking, in order to obtain measurement values that rehabilitation hospitals, physical therapy centers, and sports medicine clinics can rely on as reference data for kinematic joint parameters. In this study, 20 subjects and one abnormal subject (exhibiting foot flat) were recruited; the 20 subjects had no pathology that would affect gait and were unfamiliar with treadmill walking. A video recording was made of each subject, one by one, using a single digital video camera on a tripod positioned in the sagittal plane while the subject walked on a motorized treadmill; the treadmill is often used in rehabilitation programs because it allows standard, controlled conditions and requires little space. Motion analysis software (Dartfish) was then used to study the knee and hip joint kinematics and the spatial-temporal gait parameters (step length, stride length, stride duration, cadence) from the video recordings. The results obtained from the Dartfish program show that the knee and hip angles differ in each gait cycle and, similarly, that the spatial-temporal parameters differ in each gait cycle analyzed for the subjects.

Journal ArticleDOI
TL;DR: A computer model of the golf swing was developed, capable of calculating a diverse range of 3D kinematics and kinetics values based on motion analysis data collected in the laboratory.
Abstract: The golf swing is a complex, multi-planar, three-dimensional (3D) motion sequence performed at very high speeds. These properties make biomechanical analysis of the golf swing difficult. Hence, the aim of this study was to develop a computer model of the golf swing capable of calculating a diverse range of 3D kinematics and kinetics values based on motion analysis data collected in the laboratory. Five golfers performed six swings in the field of view of eight Falcon High Speed Resolution cameras (240 Hz), which captured the movements of 56 markers placed on the golfers and their clubs, resulting in marker trajectories that were processed into linear xyz-coordinates using the Eva Motion Analysis system. To perform the kinematics and kinetics calculations, a 20-segment rigid body model of the human body was designed in the Mechanical Systems Pack, connecting the segments by a selection of linear and spherical constraints, resulting in a system of segments with 58 degrees of freedom, with the constraint equations of motion calculated by the Newton-Lagrangian iteration method. The model allowed for the derivation of segmental sequencing, separation angles, segmental planes of motion, segmental velocity contributions, joint torques and muscle powers. The preliminary data suggest that such an integrated kinematics and kinetics analysis is necessary to understand the mechanical complexity of golf swing. Even with the small sample size analysed in this study, some interesting trends were found, such as certain violations of the classical proximal-to-distal sequencing scheme, differing swing plane and club head trajectories in the backswing and downswing phases, minimal hip angular velocity contribution to the ball at impact, concentric and eccentric muscle powers in the downswing phase, and increased lumbar loading factors from the mid-downswing phase to ball impact. © 2014 Taylor & Francis.

Book ChapterDOI
01 Nov 2013
TL;DR: A new Model-Based Approach (MBA), specially developed for Kinect™ input and based on previously validated anatomical and biomechanical studies performed by the authors, allows real 3D motion analysis of complex movements respecting the conventions expected in biomechanics and clinical motion analysis.
Abstract: The Kinect™ sensors can be used as cost-effective and easy-to-use markerless motion capture devices. Therefore, a wide range of new potential applications are possible. Unfortunately, right now, the stick-model skeleton provided by the Kinect™ is only composed of 20 points located approximately at the joint level of the subject whose movements are being captured by the camera. This relatively limited number of key points restricts the use of such devices to relatively crude motion assessment. The field of motion analysis, however, requires more key points in order to represent motion according to clinical conventions based on so-called anatomical planes. To extend the possibilities of the Kinect™, supplementary data must be added to the available standard skeleton. This paper presents a new Model-Based Approach (MBA) that has been specially developed for Kinect™ input, based on previously validated anatomical and biomechanical studies performed by the authors. This approach allows real 3D motion analysis of complex movements, respecting the conventions expected in biomechanics and clinical motion analysis.

Journal ArticleDOI
TL;DR: A high-resolution frequency analysis method, described as 3D nonharmonic analysis (NHA), which is only weakly influenced by the analysis window is proposed and the experimental results show that increasing the frequency resolution contributes to high-accuracy estimation of a motion plane.
Abstract: The spatiotemporal spectra of a video that contains a moving object form a plane in the 3D frequency domain. This plane, which is described as the theoretical motion plane, reflects the velocity of the moving objects, which is calculated from the slope. However, if the resolution of the frequency analysis method is not high enough to obtain actual spectra from the object signal, the spatiotemporal spectra disperse away from the theoretical motion plane. In this paper, we propose a high-resolution frequency analysis method, described as 3D nonharmonic analysis (NHA), which is only weakly influenced by the analysis window. In addition, we estimate the motion vectors of objects in a video using the plane-clustering method, in conjunction with the least-squares method, for 3D NHA spatiotemporal spectra. We experimentally verify the accuracy of the 3D NHA and its usefulness for a sequence containing complex motions, such as cross-over motion, through comparison with the 3D fast Fourier transform. The experimental results show that increasing the frequency resolution contributes to high-accuracy estimation of a motion plane.
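
For reference, the theoretical motion plane follows from the standard result that an image patch translating with constant velocity (v_x, v_y) concentrates all of its spatiotemporal energy on a plane through the origin of the frequency domain, so the velocity can be read off from the plane's slope:

```latex
I(x, y, t) = I_0(x - v_x t,\; y - v_y t)
\;\;\Longrightarrow\;\;
\hat{I}(\omega_x, \omega_y, \omega_t) \neq 0 \text{ only where } v_x \omega_x + v_y \omega_y + \omega_t = 0 .
```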

Proceedings ArticleDOI
30 May 2013
TL;DR: This paper proposes a new approach, based on motion analysis, to detect moving pedestrians by localizing moving objects by motion segmentation on an optical flow field as a preprocessing step so as to significantly reduce the number of detection windows needed to be evaluated by a subsequent people classifier, resulting in a fast method for real-time systems.
Abstract: The detection of moving pedestrians is of major importance in the area of robot vision, since information about such persons and their tracks should be incorporated into reliable collision avoidance algorithms. In this paper, we propose a new approach, based on motion analysis, to detect moving pedestrians. Our main contribution is to localize moving objects by motion segmentation on an optical flow field as a preprocessing step, so as to significantly reduce the number of detection windows needed to be evaluated by a subsequent people classifier, resulting in a fast method for real-time systems. Therefore, we align detection windows with segmented motion-blobs using a height-prior rule. Finally, we apply a Histogram of Oriented Gradients (HOG) features based Support Vector Machine with Radial Basis Function kernel (RBF-SVM) to estimate a confidence for each detection window, and thereby locate potential pedestrians inside the segmented blobs. Experimental results on the “Daimler mono moving pedestrian detection” benchmark show that our approach obtains a log-average miss rate of 43% in the FPPI range [10⁻², 10⁰], which is a clear improvement with respect to the naive HOG+linSVM approach and better than several other state-of-the-art detectors. Moreover, our approach also reduces runtime per frame by an order of magnitude.
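
A minimal sketch of the second stage (HOG features scored by an RBF-SVM on candidate windows aligned with motion blobs), using scikit-image and scikit-learn; the classifier is assumed to have been trained offline on pedestrian and non-pedestrian crops:

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import SVC

# The RBF-SVM would be fitted beforehand: clf.fit(hog_features, labels).
clf = SVC(kernel="rbf", probability=True)

def score_windows(frame_gray, windows, clf):
    """Confidence that each candidate window (x, y, w, h), aligned with a motion blob,
    contains a pedestrian."""
    feats = []
    for x, y, w, h in windows:
        crop = resize(frame_gray[y:y + h, x:x + w], (128, 64))      # standard HOG window size
        feats.append(hog(crop, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)))
    return clf.predict_proba(np.array(feats))[:, 1]
```

The motion segmentation and the height-prior window alignment described above are assumed to have produced the `windows` list.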

Journal ArticleDOI
TL;DR: A practical example of learning ball throwing is used to demonstrate the ability of the proposed approach to measure changes in motor functions and distinguish their performance at different stages of the learning process.
Abstract: The present paper is devoted to the problem of measuring and modelling changes in human motor functions. Nowadays, the overwhelming majority of techniques for motion analysis and gesture recognition are based on feature extraction, pattern recognition and clustering. An alternative approach to measuring and modelling changes in motor functions is proposed. Unlike feature extraction or pattern recognition techniques, the proposed approach concentrates on the total quantity and smoothness of the human limb movements; the latter constitutes the main distinctive feature of the proposed technique. When changes in human motor functions are caused by learning a new motor activity, the amount and smoothness of the movements may provide the information necessary to measure the effectiveness of the training technique. The notion “motion mass” is introduced as a measure associated with the motion, which describes how much and how smoothly certain joints have moved. A practical example of learning ball throwing is used to demonstrate the ability of the proposed approach to measure changes in motor functions and distinguish their performance at different stages of the learning process.
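
The paper's exact definition of motion mass is not reproduced here; the sketch below merely illustrates the general idea, under the assumption that the amount of motion is measured as total path length and smoothness as accumulated acceleration magnitude of a tracked joint:

```python
import numpy as np

def motion_measures(joint_traj, dt):
    """Illustrative amount-and-smoothness measures for one joint trajectory (n_frames x 3).

    path_length : how much the joint moved in total
    acc_mass    : accumulated acceleration magnitude (lower = smoother movement)
    """
    vel = np.gradient(joint_traj, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    path_length = np.sum(np.linalg.norm(np.diff(joint_traj, axis=0), axis=1))
    acc_mass = np.sum(np.linalg.norm(acc, axis=1)) * dt
    return path_length, acc_mass
```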

Journal ArticleDOI
TL;DR: This work is intended not only to test the performance of the DPSO method, but also to present a novel study in this field by identifying a putting “signature” of each player.
Abstract: This paper presents a methodology for visual detection and parameter estimation to analyze the effects of the variability in the performance of golf putting. A digital camera was used in each trial to track the putt gesture. The detection of the horizontal position of the golf club was performed using a computer vision technique, followed by an estimation algorithm divided in two different stages. On a first stage, diverse nonlinear estimation techniques were used and evaluated to extract a sinusoidal model of each trial. Secondly, several expert golf player trials were analyzed and, based on the results of the first stage, the Darwinian particle swarm optimization (DPSO) technique was employed to obtain a complete kinematical analysis and a characterization of each player's putting technique. In this work, it is intended not only to test the performance of the DPSO method, but also to present a novel study in this field by identifying a putting "signature" of each player.
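
A minimal sketch of fitting the sinusoidal trial model mentioned above by nonlinear least squares with SciPy (the paper instead uses several nonlinear estimators and DPSO, which are not reproduced; the data here are synthetic):

```python
import numpy as np
from scipy.optimize import curve_fit

def putt_model(t, amplitude, freq, phase, offset):
    """Sinusoidal model of the club's horizontal position during a putt."""
    return amplitude * np.sin(2 * np.pi * freq * t + phase) + offset

# t, x: time stamps and detected horizontal club positions from the vision stage
# (synthetic data standing in for a real trial).
t = np.linspace(0, 2, 120)
x = 0.15 * np.sin(2 * np.pi * 0.8 * t + 0.3) + 0.02 * np.random.randn(t.size)

params, _ = curve_fit(putt_model, t, x, p0=[0.1, 1.0, 0.0, 0.0])
amplitude, freq, phase, offset = params
```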

Proceedings ArticleDOI
23 Jun 2013
TL;DR: A novel approach of using video to capture multipage documents by combining video motion analysis, inertial sensor signals, and an image quality prediction technique to select the best page images from the video.
Abstract: This paper presents a mobile application for capturing images of printed multi-page documents with a smartphone camera. With today's available document capture applications, the user has to carefully capture individual photographs of each page and assemble them into a document, leading to a cumbersome and time consuming user experience. We propose a novel approach of using video to capture multipage documents. Our algorithm automatically selects the best still images corresponding to individual pages of the document from the video. The technique combines video motion analysis, inertial sensor signals, and an image quality (IQ) prediction technique to select the best page images from the video. For the latter, we extend a previous no-reference IQ prediction algorithm to suit the needs of our video application. The algorithm has been implemented on an iPhone 4S. Individual pages are successfully extracted for a wide variety of multi-page documents. OCR analysis shows that the quality of document images produced by our app is comparable to that of standard still captures. At the same time, user studies confirm that in the majority of trials, video capture provides an experience that is faster and more convenient than multiple still captures.