
Showing papers on "Inertial measurement unit published in 2022"


Journal ArticleDOI
TL;DR: A method is proposed for the IMU and automotive onboard sensors to estimate the yaw misalignment autonomously, based on piece-wise constant system (PWCS) and singular value decomposition (SVD) theory.

39 citations


Journal ArticleDOI
TL;DR: The utilisation of the fusion approach presented here warrants further investigation in those with neurological conditions, which could significantly contribute to the current understanding of impaired gait.

34 citations


Journal ArticleDOI
TL;DR: This paper provides a comprehensive review of the state-of-the-art embedded sensors, communication technologies, computing platforms and machine learning techniques used in autonomous UAVs.
Abstract: Unmanned aerial vehicles (UAVs) are increasingly becoming popular due to their use in many commercial and military applications, and their affordability. UAVs are equipped with various sensors, hardware platforms and software technologies which enable them to support a diverse application portfolio. Sensors include vision-based sensors such as RGB-D cameras, thermal cameras, light detection and ranging (LiDAR), mmWave radars, ultrasonic sensors, and an inertial measurement unit (IMU), which enable UAVs to perform autonomous navigation, obstacle detection, collision avoidance, object tracking and aerial inspection. To enable smooth operation, UAVs utilize a number of communication technologies such as wireless fidelity (Wi-Fi), long range (LoRa), long-term evolution for machine-type communication (LTE-M), etc., along with various machine learning algorithms. However, each of these technologies comes with its own set of advantages and challenges. Hence, it is essential to have an overview of the different types of sensors, computing and communication modules, and algorithms used for UAVs. This paper provides a comprehensive review of the state-of-the-art embedded sensors, communication technologies, computing platforms and machine learning techniques used in autonomous UAVs. The key performance metrics and operating principles are presented, along with a detailed comparative study of the various technologies. The information gathered in this paper aims to serve as a practical reference guide for designing smart sensing applications, low-latency and energy-efficient communication strategies, power-efficient computing modules and machine learning algorithms for autonomous UAVs. Finally, some of the open issues and challenges for future research and development are also discussed.

32 citations


Journal ArticleDOI
TL;DR: In this paper, the integration of four sensor families is considered: sensors for precise absolute positioning (Global Navigation Satellite System (GNSS) receivers and an Inertial Measurement Unit (IMU)), visual sensors (monocular and stereo cameras), audio sensors (microphones), and sensors for remote sensing (RADAR and LiDAR).
Abstract: Autonomous ships are expected to improve the level of safety and efficiency in future maritime navigation. Such vessels need perception for two purposes: to perform autonomous situational awareness and to monitor the integrity of the sensor system itself. In order to meet these needs, the perception system must fuse data from novel and traditional perception sensors using Artificial Intelligence (AI) techniques. This article overviews the recognized operational requirements that are imposed on regular and autonomous seafaring vessels, and then proceeds to consider suitable sensors and relevant AI techniques for an operational sensor system. The integration of four sensor families is considered: sensors for precise absolute positioning (Global Navigation Satellite System (GNSS) receivers and an Inertial Measurement Unit (IMU)), visual sensors (monocular and stereo cameras), audio sensors (microphones), and sensors for remote sensing (RADAR and LiDAR). Additionally, sources of auxiliary data, such as the Automatic Identification System (AIS) and external data archives, are discussed. The perception tasks are related to well-defined problems, such as situational abnormality detection, vessel classification, and localization, that are solvable using AI techniques. Machine learning methods, such as deep learning and Gaussian processes, are identified as especially relevant for these problems. The different sensors and AI techniques are characterized with the operational requirements in view, and some example state-of-the-art options are compared based on accuracy, complexity, required resources, compatibility and adaptability to the maritime environment, and especially towards practical realization of autonomous systems.

30 citations


Journal ArticleDOI
TL;DR: OpenSenseRT as discussed by the authors is an open-source and wearable system that estimates upper and lower extremity kinematics in real time by using inertial measurement units and a portable microcontroller.
Abstract: Analyzing human motion is essential for diagnosing movement disorders and guiding rehabilitation for conditions like osteoarthritis, stroke, and Parkinson's disease. Optical motion capture systems are the standard for estimating kinematics, but the equipment is expensive and requires a predefined space. While wearable sensor systems can estimate kinematics in any environment, existing systems are generally less accurate than optical motion capture. Many wearable sensor systems require a computer in close proximity and use proprietary software, limiting experimental reproducibility. Here, we present OpenSenseRT, an open-source and wearable system that estimates upper and lower extremity kinematics in real time by using inertial measurement units and a portable microcontroller. We compared the OpenSenseRT system to optical motion capture and found an average RMSE of 4.4 degrees across 5 lower-limb joint angles during three minutes of walking and an average RMSE of 5.6 degrees across 8 upper extremity joint angles during a Fugl-Meyer task. The open-source software and hardware are scalable, tracking 1 to 14 body segments, with one sensor per segment. A musculoskeletal model and inverse kinematics solver estimate kinematics in real time. The computation frequency depends on the number of tracked segments but is sufficient for real-time measurement for many tasks of interest; for example, the system can track 7 segments at 30 Hz. The system uses off-the-shelf parts costing approximately $100 USD plus $20 for each tracked segment. The OpenSenseRT system is validated against optical motion capture, is low-cost, and is simple to replicate, enabling movement analysis in clinics, homes, and free-living settings.
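
As a rough illustration of the validation metric reported above, the sketch below computes per-joint RMSE between IMU-estimated and optical joint angles; the array names, shapes, and random data are placeholders, not the OpenSenseRT code.

import numpy as np

def joint_angle_rmse(imu_angles, mocap_angles):
    # RMSE per joint in degrees; rows are time samples, columns are joints
    err = np.asarray(imu_angles) - np.asarray(mocap_angles)
    return np.sqrt(np.mean(err ** 2, axis=0))

# e.g. three minutes of walking at 30 Hz, 5 lower-limb joint angles
imu = np.random.randn(5400, 5)           # placeholder IMU-based estimates
mocap = imu + np.random.randn(5400, 5)   # placeholder optical reference
rmse = joint_angle_rmse(imu, mocap)
print(rmse, "average:", rmse.mean())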

29 citations


Journal ArticleDOI
TL;DR: In this article, a method for the inertial measurement unit (IMU) and automotive onboard sensors to estimate the yaw misalignment autonomously is proposed. The method addresses the case where the vehicle is equipped with an IMU whose misalignment is difficult to measure directly.

29 citations


Journal ArticleDOI
TL;DR: In this paper, the state-of-the-art studies of visual and visual-based (i.e., visual-inertial, visual-LiDAR, visual-LiDAR-IMU) SLAM are comprehensively reviewed, and the positioning accuracy of previous work is compared with well-known frameworks on public datasets.

29 citations


Journal ArticleDOI
25 Mar 2022-Sensors
TL;DR: In this paper, the authors studied the potential of a low earth orbit (LEO) satellite communication system for a high-dynamic application, when it is integrated with an inertial measurement unit (IMU) and magnetometers.
Abstract: Resilient navigation in Global Navigation Satellite System (GNSS)-degraded and -denied environments is increasingly required for many applications. It can typically be based on multi-sensor data fusion that relies on technologies alternative to GNSS. In this work, we studied the potential of a low earth orbit (LEO) satellite communication system for a high-dynamic application, when integrated with an inertial measurement unit (IMU) and magnetometers. We derived the influence of the main error sources that affect LEO space vehicle (SV) Doppler-based navigation on both positioning and attitude estimation. This allowed us to determine the best, intermediate, and worst cases of navigation performance. We show that while the positioning error is large under large orbit errors or high SV clock drifts, it becomes competitive with that of an inertial navigation system (INS) based on a better-quality IMU if precise satellite orbits are available. On the other hand, the attitude estimation tolerates large orbit errors and high SV clock drifts. The obtained results suggest that LEO SV signals, used as signals of opportunity for navigation, are an attractive alternative in GNSS-denied environments for high-dynamic vehicles.
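
For readers unfamiliar with the underlying observable, the minimal sketch below predicts the Doppler shift of a LEO downlink from the relative velocity along the line of sight; it ignores SV clock drift and atmospheric effects, and the carrier frequency and geometry are illustrative assumptions, not values from the paper.

import numpy as np

C = 299_792_458.0  # speed of light, m/s

def doppler_shift(f_carrier, sat_pos, sat_vel, rx_pos, rx_vel):
    # Predicted Doppler shift (Hz): f_d = -f * range_rate / c
    los = sat_pos - rx_pos
    los = los / np.linalg.norm(los)                 # unit line-of-sight vector
    range_rate = np.dot(sat_vel - rx_vel, los)      # m/s, positive when receding
    return -f_carrier * range_rate / C

# Illustrative LEO geometry and an L-band carrier (~1.6 GHz)
fd = doppler_shift(1.6e9,
                   sat_pos=np.array([7.0e6, 0.0, 0.0]),
                   sat_vel=np.array([0.0, 7.5e3, 0.0]),
                   rx_pos=np.array([6.371e6, 1.0e5, 0.0]),
                   rx_vel=np.zeros(3))
print(f"predicted Doppler: {fd:.1f} Hz")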

25 citations


Journal ArticleDOI
TL;DR: In this paper, the authors evaluated the performance of the iPhone 12 Pro for orientation and raster image data capture and found that the performance was comparable to analog compass-clinometers and reflex/mirrorless cameras.

24 citations


Journal ArticleDOI
TL;DR: In this article, a multi-layer fusion framework with sensor, data and gait characteristics is proposed to extract informative gait features and interpret them with validated algorithms; however, the impact of stroke survivors' gait abnormalities on performance was not considered.

23 citations


Journal ArticleDOI
01 Jan 2022
TL;DR: In this paper, a deep learning model named MBiGRU (multimodal bidirectional gated recurrent unit) neural network was proposed to recognize everyday sport-related actions, with the publicly accessible UCI-DSADS dataset utilized as a benchmark to compare the effectiveness of the proposed deep learning network against other deep learning architectures (CNNs and GRUs).
Abstract: Numerous learning-based techniques for effective human activity recognition (HAR) have recently been developed. Wearable inertial sensors are critical for HAR studies to characterize sport-related activities. Smart wearables are now ubiquitous and can benefit people of all ages. HAR investigations typically involve sensor-based evaluation. Sport-related activities are unpredictable and have historically been classified as complex, with conventional machine learning (ML) algorithms applied to resolve HAR issues. The efficiency of machine learning techniques in categorizing data is limited by the hand-crafted feature extraction procedure. A deep learning model named MBiGRU (multimodal bidirectional gated recurrent unit) neural network was proposed to recognize everyday sport-related actions, with the publicly accessible UCI-DSADS dataset utilized as a benchmark to compare the effectiveness of the proposed network against other deep learning architectures (CNNs and GRUs). Experiments were performed to quantify four evaluation criteria: accuracy, precision, recall, and F1-score. Following a 10-fold cross-validation approach, the experimental findings indicated that the MBiGRU model achieved a superior accuracy of 99.55% against the other benchmark deep learning networks. The available evidence was also evaluated to explore ways to enhance the proposed model and training procedure.
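
A minimal sketch of the bidirectional-GRU idea is given below in PyTorch; the layer sizes are assumptions rather than the authors' exact MBiGRU architecture, and the input shape follows the usual DSADS segmentation (19 activities, 45 sensor channels, 5 s windows at 25 Hz).

import torch
import torch.nn as nn

class BiGRUClassifier(nn.Module):
    def __init__(self, n_channels=45, hidden=128, n_classes=19):
        super().__init__()
        self.gru = nn.GRU(n_channels, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)   # forward + backward states

    def forward(self, x):                # x: (batch, time, channels)
        out, _ = self.gru(x)
        return self.head(out[:, -1])     # class logits from the last time step

model = BiGRUClassifier()
logits = model(torch.randn(8, 125, 45))  # 8 windows of 5 s at 25 Hz
print(logits.shape)                      # torch.Size([8, 19])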

Journal ArticleDOI
01 Apr 2022
TL;DR: M2DGR as discussed by the authors is a large-scale dataset collected by a ground robot with a full sensor-suite including six fish-eye and one sky-pointing RGB cameras, an infrared camera, an event camera, a Visual-Inertial Sensor (VI-sensor), an inertial measurement unit (IMU), a LiDAR, a consumer-grade Global Navigation Satellite System (GNSS) receiver and a GNSS-IMU navigation system with real-time kinematic (RTK) signals.
Abstract: We introduce M2DGR: a novel large-scale dataset collected by a ground robot with a full sensor-suite including six fish-eye and one sky-pointing RGB cameras, an infrared camera, an event camera, a Visual-Inertial Sensor (VI-sensor), an inertial measurement unit (IMU), a LiDAR, a consumer-grade Global Navigation Satellite System (GNSS) receiver and a GNSS-IMU navigation system with real-time kinematic (RTK) signals. All those sensors were well-calibrated and synchronized, and their data were recorded simultaneously. The ground truth trajectories were obtained by the motion capture device, a laser 3D tracker, and an RTK receiver. The dataset comprises 36 sequences (about 1 TB) captured in diverse scenarios including both indoor and outdoor environments. We evaluate state-of-the-art SLAM algorithms on M2DGR. Results show that existing solutions perform poorly in some scenarios. For the benefit of the research community, we make the dataset and tools public. The webpage of our project is https://github.com/SJTU-ViSYS/M2DGR .

Journal ArticleDOI
TL;DR: In this paper, a smartwatch-based system was used for fall detection. Accuracies of 99.59% and 97.35% were achieved when considering only binary classification (falling vs. all other activities), and perfect accuracy was achieved when considering all activities.

Journal ArticleDOI
23 Feb 2022-Sensors
TL;DR: Studies using IMUs to record running biomechanics have mainly been conducted indoors, on a treadmill, at prescribed speeds, and over short distances; it is suggested that future studies move out of the lab to less controlled and more real-world environments.
Abstract: Inertial measurement units (IMUs) can be used to monitor running biomechanics in real-world settings, but IMUs are often used within a laboratory. The purpose of this scoping review was to describe how IMUs are used to record running biomechanics in both laboratory and real-world conditions. We included peer-reviewed journal articles that used IMUs to assess gait quality during running. We extracted data on running conditions (indoor/outdoor, surface, speed, and distance), device type and location, metrics, participants, and purpose and study design. A total of 231 studies were included. Most (72%) studies were conducted indoors, and in 67% of all studies the analyzed distance was only one step or stride or <200 m. The most common device type and location combination was a triaxial accelerometer on the shank (18% of device and location combinations). The most common analyzed metric was vertical/axial magnitude, which was reported in 64% of all studies. Most studies (56%) included recreational runners. For the past 20 years, studies using IMUs to record running biomechanics have mainly been conducted indoors, on a treadmill, at prescribed speeds, and over short distances. We suggest that future studies should move out of the lab to less controlled and more real-world environments.

Journal ArticleDOI
TL;DR: A deep learning network architecture named GPS/INS neural network (GI-NN) is proposed in this paper to assist the low-cost inertial navigation system; results illustrate that the proposed method can provide more accurate and reliable navigation solutions in GPS-denied environments.
Abstract: The low-cost inertial navigation system (INS) suffers from bias and measurement noise, which result in poor navigation accuracy during global positioning system (GPS) outages. Aiming to bridge GPS outage durations and enhance navigation performance, a deep learning network architecture named GPS/INS neural network (GI-NN) is proposed in this paper to assist the INS. The GI-NN combines a convolutional neural network and a gated recurrent unit neural network to extract spatial features from inertial measurement unit (IMU) signals and track their temporal characteristics. The relationship among the attitude, specific force, angular rate, and the GPS position increment is modelled, while the current and previous IMU data are used to estimate the dynamics of the vehicle by GI-NN. Numerical simulations, real field tests, and public data tests are performed to evaluate the effectiveness of the proposed algorithm. Compared with traditional machine learning algorithms, the results illustrate that the proposed method can provide more accurate and reliable navigation solutions in GPS-denied environments.
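
The sketch below shows one plausible reading of the CNN-plus-GRU combination: a 1-D convolution extracts spatial features across IMU channels and a GRU tracks their temporal behaviour before regressing a position increment. All layer sizes, and the choice to regress a 3-D increment directly, are assumptions, not the GI-NN configuration.

import torch
import torch.nn as nn

class CnnGruIncrement(nn.Module):
    def __init__(self, n_channels=6, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(                 # spatial features from IMU channels
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU())
        self.gru = nn.GRU(32, hidden, batch_first=True)   # temporal characteristics
        self.head = nn.Linear(hidden, 3)           # GPS position increment (N, E, D)

    def forward(self, imu):                        # imu: (batch, time, 6)
        z = self.conv(imu.transpose(1, 2)).transpose(1, 2)
        out, _ = self.gru(z)
        return self.head(out[:, -1])

print(CnnGruIncrement()(torch.randn(4, 100, 6)).shape)   # torch.Size([4, 3])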

Journal ArticleDOI
01 May 2022-Sensors
TL;DR: The proposed conceptual model of approaching ML could reduce the risk of overrepresenting multicollinear gait features in the model, reducing the risk of overfitting in the test performances while fostering the explainability of the results.
Abstract: The aim of this study was to determine which supervised machine learning (ML) algorithm can most accurately classify people with Parkinson’s disease (pwPD) from speed-matched healthy subjects (HS) based on a selected minimum set of IMU-derived gait features. Twenty-two gait features were extrapolated from the trunk acceleration patterns of 81 pwPD and 80 HS, including spatiotemporal, pelvic kinematics, and acceleration-derived gait stability indexes. After a three-level feature selection procedure, seven gait features were considered for implementing five ML algorithms: support vector machine (SVM), artificial neural network, decision trees (DT), random forest (RF), and K-nearest neighbors. Accuracy, precision, recall, and F1 score were calculated. SVM, DT, and RF showed the best classification performances, with prediction accuracy higher than 80% on the test set. The conceptual model of approaching ML that we proposed could reduce the risk of overrepresenting multicollinear gait features in the model, reducing the risk of overfitting in the test performances while fostering the explainability of the results.
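
A compact sketch of the final classification stage is shown below: a standardized SVM over a small set of selected gait features with cross-validation. The synthetic feature matrix stands in for the study's 7 selected IMU-derived features; only the shapes mirror the cohort (81 pwPD, 80 HS).

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(161, 7))        # 81 pwPD + 80 HS, 7 selected gait features
y = np.array([1] * 81 + [0] * 80)    # 1 = pwPD, 0 = healthy subject

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, X, y, cv=5).mean())   # chance level on random data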

Journal ArticleDOI
TL;DR: In this paper, a knee rehabilitation robot that works with surface EMG (sEMG) signals was designed and built, where the muscle forces were estimated from sEMG signals using several machine learning techniques, i.e., support vector machine (SVM), support vector regression (SVR), and random forest (RF).
Abstract: The main objective of this work is to establish a framework for processing and evaluating lower limb electromyography (EMG) signals ready to be fed to a rehabilitation robot. We design and build a knee rehabilitation robot that works with surface EMG (sEMG) signals. In our device, the muscle forces are estimated from sEMG signals using several machine learning techniques, i.e., support vector machine (SVM), support vector regression (SVR) and random forest (RF). In order to improve the estimation accuracy, we devise a genetic algorithm (GA) for parameter optimisation and feature extraction within the proposed methods. At the same time, a load cell and a wearable inertial measurement unit (IMU) are mounted on the robot to measure the muscle force and knee joint angle, respectively. Various performance measures have been employed to assess the performance of the proposed system. Our extensive experiments and comparison with related works revealed a high estimation accuracy of 98.67% for lower limb muscles. The main advantage of the proposed techniques is high estimation accuracy, leading to improved performance of the therapy, while muscle models become especially sensitive to the tendon stiffness and the slack length.
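
To make the regression step concrete, here is a hedged sketch of SVR mapping per-window sEMG features to a load-cell force target; the feature set, data, and hyperparameters are placeholders, and the paper additionally tunes parameters with a genetic algorithm.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))   # e.g. RMS, MAV, zero crossings, waveform length
force = X @ np.array([2.0, 1.0, -0.5, 0.3]) + rng.normal(scale=0.1, size=500)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X[:400], force[:400])             # train on the first 400 windows
print(model.score(X[400:], force[400:]))    # R^2 on held-out windows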

Journal ArticleDOI
TL;DR: In this paper, a water segmentation and refinement (WaSR) network is proposed for obstacle detection using semantic segmentation in an aquatic environment, which improves the segmentation accuracy of the water component in the presence of visual ambiguities.
Abstract: Obstacle detection using semantic segmentation has become an established approach in autonomous vehicles. However, existing segmentation methods, primarily developed for ground vehicles, are inadequate in an aquatic environment as they produce many false positive (FP) detections in the presence of water reflections and wakes. We propose a novel deep encoder–decoder architecture, a water segmentation and refinement (WaSR) network, specifically designed for the marine environment to address these issues. A deep encoder based on ResNet101 with atrous convolutions enables the extraction of rich visual features, while a novel decoder gradually fuses them with inertial information from the inertial measurement unit (IMU). The inertial information greatly improves the segmentation accuracy of the water component in the presence of visual ambiguities, such as fog on the horizon. Furthermore, a novel loss function for semantic separation is proposed to enforce the separation of different semantic components to increase the robustness of the segmentation. We investigate different loss variants and observe a significant reduction in FPs and an increase in true positives (TPs). Experimental results show that WaSR outperforms the current state of the art by approximately 4% in F1 score on a challenging unmanned surface vehicle dataset. WaSR shows remarkable generalization capabilities and outperforms the state of the art by over 24% in F1 score on a strict domain generalization experiment.

Journal ArticleDOI
TL;DR: SelfVIO as mentioned in this paper uses adversarial training and self-adaptive visual-inertial sensor fusion to learn the joint estimation of 6 degrees-of-freedom ego-motion and a depth map of the scene from unlabeled monocular RGB image sequences and inertial measurement unit (IMU) readings.

Journal ArticleDOI
25 Mar 2022-Sensors
TL;DR: A hybrid hand gesture system that combines an inertial measurement unit (IMU)-based motion capture system and a vision-based gesture system is proposed to increase real-time performance, and proves to be a safer and more intuitive HUI design, with a 0.089 ms processing speed and an average lap time about 19 s shorter than with the joystick controller.
Abstract: As an alternative to the traditional remote controller, research on vision-based hand gesture recognition is being actively conducted in the field of interaction between humans and unmanned aerial vehicles (UAVs). However, vision-based gesture systems face a challenging problem in recognizing the motion of dynamic gestures, because it is difficult to estimate the pose of multi-dimensional hand gestures in 2D images. This leads to complex algorithms, including tracking in addition to detection, to recognize dynamic gestures, but these are not suitable for human–UAV interaction (HUI) systems that require a safe design with high real-time performance. Therefore, in this paper, we propose a hybrid hand gesture system that combines an inertial measurement unit (IMU)-based motion capture system and a vision-based gesture system to increase real-time performance. First, IMU-based commands and vision-based commands are divided according to whether drone operation commands are continuously input. Second, IMU-based control commands are intuitively mapped to allow the UAV to move in the same direction by utilizing the estimated orientation sensed by a thumb-mounted micro-IMU, and vision-based control commands are mapped to the hand's appearance through real-time object detection. The proposed system is verified in a simulation environment through an efficiency evaluation against the dynamic gestures of an existing vision-based system, in addition to a usability comparison with a traditional joystick controller conducted with participants who had no experience in manipulation. The results prove that it is a safer and more intuitive HUI design, with a 0.089 ms processing speed and an average lap time about 19 s shorter than with the joystick controller. In other words, the proposed system is viable as an alternative to existing HUIs.

Journal ArticleDOI
TL;DR: A review of various calibration techniques of MEMS inertial sensors is presented in this article, where the authors summarize the calibration schemes into two general categories: autonomous and non-autonomous calibration.
Abstract: A review of various calibration techniques of MEMS inertial sensors is presented in this paper. MEMS inertial sensors are subject to various sources of error, so it is essential to correct these errors through calibration techniques to improve the accuracy and reliability of these sensors. In this paper, we first briefly describe the main characteristics of MEMS inertial sensors and then discuss some common error sources and the establishment of error models. A systematic review of calibration methods for inertial sensors, including gyroscopes and accelerometers, is conducted. We summarize the calibration schemes into two general categories: autonomous and nonautonomous calibration. A comprehensive overview of the latest progress made in MEMS inertial sensor calibration technology is presented, and the current state of the art and development prospects of MEMS inertial sensor calibration are analyzed with the aim of providing a reference for the future development of calibration technology.
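
As a concrete instance of the error models discussed in such reviews, the sketch below fits the standard linear accelerometer model y = (I + S)a + b from six static orientations by least squares; the data are synthetic, and the six-position scheme is one common textbook method, not a technique attributed to this paper.

import numpy as np

rng = np.random.default_rng(2)
g = 9.80665
true_S = np.diag([0.01, -0.02, 0.015])        # scale-factor errors
true_b = np.array([0.05, -0.03, 0.02])        # biases, m/s^2

# Reference gravity vectors for six static orientations (+/- each axis up)
A = g * np.vstack([np.eye(3), -np.eye(3)])
Y = A @ (np.eye(3) + true_S).T + true_b + rng.normal(scale=1e-3, size=A.shape)

# Solve y = M a + b per axis via least squares on the augmented matrix [a, 1]
X = np.hstack([A, np.ones((6, 1))])
theta, *_ = np.linalg.lstsq(X, Y, rcond=None)
M_est, b_est = theta[:3].T, theta[3]
print(np.diag(M_est) - 1)   # recovered scale-factor errors
print(b_est)                # recovered biases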

Journal ArticleDOI
TL;DR: In this paper, an enabling event-triggered sideslip angle estimator is proposed by using the kinematic information from a low-cost global positioning system (GPS) and an on-board inertial measurement unit (IMU).
Abstract: Accurate vehicle sideslip angle estimation is crucial for vehicle stability control. In this article, an enabling event-triggered sideslip angle estimator is proposed by using the kinematic information from a low-cost global positioning system (GPS) and an on-board inertial measurement unit (IMU). First, a preliminary vehicle sideslip angle is derived using the heading angle of the GPS and the yaw rate of the IMU, and an event-triggered mechanism is proposed to eliminate the accumulative estimation error. The algorithm convergence is guaranteed through theoretical deduction. Second, longitudinal and lateral vehicle velocities are obtained using the preliminary vehicle sideslip angle, the measured GPS velocity, and their kinematic relationship, based on which a multisensor fusion and a multistep Kalman filter scheme are, respectively, presented to realize longitudinal and lateral vehicle velocity estimation. By doing this, the update frequency and estimation accuracy of the vehicle sideslip angle estimate can be further improved to meet the requirement of online implementation. Finally, the effectiveness and reliability of the proposed scheme are verified under comprehensive driving conditions through both hardware-in-the-loop (HIL) and field tests. The results show that the proposed event-triggered sideslip angle estimator has mean estimation errors of 0.029° and 0.14° in the HIL and field tests, respectively, exhibiting better estimation accuracy, reliability, and real-time performance compared with other typical estimators.
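
A much-simplified sketch of the kinematic idea is given below: between GPS fixes, the sideslip angle is propagated with the small-angle relation beta_dot ≈ ay/vx − r, and each GPS epoch resets it from course minus heading to remove the accumulated drift. This is an illustrative reduction, not the paper's event-triggered filter.

import numpy as np

def propagate_sideslip(beta, ay, vx, yaw_rate, dt):
    # One IMU step of the kinematic sideslip model (all angles in radians)
    return beta + (ay / max(vx, 0.1) - yaw_rate) * dt

def gps_correction(course, heading):
    # Sideslip from GPS course-over-ground minus vehicle heading, wrapped
    return (course - heading + np.pi) % (2 * np.pi) - np.pi

beta = 0.0
for _ in range(100):   # 1 s of IMU data at 100 Hz between GPS fixes
    beta = propagate_sideslip(beta, ay=2.0, vx=15.0, yaw_rate=0.12, dt=0.01)
beta = gps_correction(course=0.52, heading=0.50)   # reset at the GPS epoch
print(np.degrees(beta))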

Journal ArticleDOI
TL;DR: In this article, a fully portable photonic smart garment with 30 multiplexed polymer optical fiber (POF) sensors combined with Artificial Intelligence (AI) algorithms was developed, and the system's ability to classify the activities of multiple subjects was evaluated.
Abstract: Smart textiles are novel solutions for remote healthcare monitoring which involve non-invasive sensors-integrated clothing. Polymer optical fiber (POF) sensors have attractive features for smart textile technology, and combined with Artificial Intelligence (AI) algorithms increase the potential of intelligent decision-making. This paper presents the development of a fully portable photonic smart garment with 30 multiplexed POF sensors combined with AI algorithms to evaluate the system ability on the activity classification of multiple subjects. Six daily activities are evaluated: standing, sitting, squatting, up-and-down arms, walking and running. A k-nearest neighbors classifier is employed and results from 10 trials of all volunteers presented an accuracy of 94.00 (0.14)%. To achieve an optimal amount of sensors, the principal component analysis is used for one volunteer and results showed an accuracy of 98.14 (0.31)% using 10 sensors, 1.82% lower than using 30 sensors. Cadence and breathing rate were estimated and compared to the data from an inertial measurement unit located on the garment back and the highest error was 2.22%. Shoulder flexion/extension was also evaluated. The proposed approach presented feasibility for activity recognition and movement-related parameters extraction, leading to a system fully optimized, including the number of sensors and wireless communication, for Healthcare 4.0.
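
The classification pipeline described above maps naturally onto a few lines of scikit-learn; the sketch below combines PCA dimensionality reduction with a k-nearest neighbors classifier over six activity classes, using synthetic stand-ins for the 30 POF channels.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.normal(size=(600, 30))      # time windows x 30 POF sensor channels
y = rng.integers(0, 6, size=600)    # six daily activities

clf = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=5))
print(cross_val_score(clf, X, y, cv=10).mean())   # chance level on random data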

Journal ArticleDOI
TL;DR: An efficient human activity recognition (HAR) model is presented, based on efficient handcrafted features and a Random Forest classifier, for IoHT applications; results confirm the superiority of the applied model over others introduced in the literature for the same dataset.

Journal ArticleDOI
TL;DR: In this paper, the authors propose a feasible fusion framework that utilizes a particle filter to integrate data-driven inertial navigation with localization based on Bluetooth Low Energy (BLE), which can further improve localization accuracy over the existing fusion method.
Abstract: The introduction of data-driven inertial navigation provides new opportunities that pedestrian dead reckoning could not well provide for constraining inertial system error drift on smartphones, and it has been considered another promising approach to meet the requirements of location-based services. However, indoor localization systems based on a single technology still have their limitations, such as the drift of inertial navigation and the received signal strength fluctuation of Bluetooth, making them unable to provide reliable positioning. To exploit the complementary strengths of each technology, this paper proposes a feasible fusion framework that utilizes a particle filter to integrate data-driven inertial navigation with localization based on Bluetooth Low Energy (BLE). For data-driven inertial navigation, on top of a deep neural network with great potential for model-free generalization that regresses pedestrian motion characteristics, we combined a method of using gravity to stabilize inertial measurement unit data to make the network more robust. Experimental results show that, in tests of different smartphone usages, the proposed data-driven inertial navigation and BLE-based localization technologies perform well in modeling the user's movement and in positioning, respectively. Because of this, the proposed fusion algorithm is almost unaffected by smartphone usage. Compared with BLE-based localization, which achieved a good mean positional error (MPE) of 1.76 m, the proposed fusion algorithm reduced the MPE by 32.35%, 20.51%, 20.74%, and 45.37% for the four usages of texting, swinging, calling, and pocket, respectively, and can further improve localization accuracy over the existing fusion method.
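
A compact sketch of the fusion idea follows: particles are propagated with the network's displacement estimate plus process noise, weighted by a Gaussian BLE position likelihood, and systematically resampled. The motion and measurement models here are placeholders, not the paper's tuned framework.

import numpy as np

rng = np.random.default_rng(4)
N = 1000
particles = rng.normal(0.0, 1.0, size=(N, 2))   # x, y in metres
weights = np.full(N, 1.0 / N)

def pf_step(particles, weights, inertial_delta, ble_pos, ble_sigma=1.8):
    # Predict: apply the data-driven displacement estimate plus process noise
    particles = particles + inertial_delta + rng.normal(0, 0.1, particles.shape)
    # Update: Gaussian likelihood of the BLE position fix
    d2 = np.sum((particles - ble_pos) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / ble_sigma ** 2)
    weights /= weights.sum()
    # Systematic resampling back to uniform weights
    u = (rng.random() + np.arange(N)) / N
    idx = np.searchsorted(np.cumsum(weights), u)
    return particles[idx], np.full(N, 1.0 / N)

particles, weights = pf_step(particles, weights,
                             inertial_delta=np.array([0.7, 0.1]),
                             ble_pos=np.array([0.8, 0.0]))
print(particles.mean(axis=0))   # fused position estimate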

Journal ArticleDOI
01 Feb 2022-Sensors
TL;DR: A user-independent gesture classification method based on a sensor fusion technique that combines EMG data and inertial measurement unit (IMU) data is presented, suggesting that by using the proposed sensor fusion approach, it is possible to achieve a more natural interface that allows better control of wearable mechatronic devices during robot assisted therapies.
Abstract: Recently, it has been proven that targeting motor impairments as early as possible while using wearable mechatronic devices for assisted therapy can improve rehabilitation outcomes. However, despite the advanced progress on control methods for wearable mechatronic devices, the need for a more natural interface that allows for better control remains. To address this issue, electromyography (EMG)-based gesture recognition systems have been studied as a potential solution for human–machine interface applications. Recent studies have focused on developing user-independent gesture recognition interfaces to reduce calibration times for new users. Unfortunately, given the stochastic nature of EMG signals, the performance of these interfaces is negatively impacted. To address this issue, this work presents a user-independent gesture classification method based on a sensor fusion technique that combines EMG data and inertial measurement unit (IMU) data. The Myo Armband was used to measure muscle activity and motion data from healthy subjects. Participants were asked to perform seven types of gestures in four different arm positions while using the Myo on their dominant limb. Data obtained from 22 participants were used to classify the gestures using three different classification methods. Overall, average classification accuracies in the range of 67.5–84.6% were obtained, with the Adaptive Least-Squares Support Vector Machine model obtaining accuracies as high as 92.9%. These results suggest that by using the proposed sensor fusion approach, it is possible to achieve a more natural interface that allows better control of wearable mechatronic devices during robot assisted therapies.
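
One way to realize the feature-level fusion described above is simply to concatenate per-window EMG and IMU features before a single classifier, as in the sketch below; the feature choices and synthetic data are assumptions, with the dimensions motivated by the Myo's 8 EMG channels plus a 9-axis IMU.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
emg_feats = rng.normal(size=(700, 8 * 2))   # e.g. RMS + waveform length per EMG channel
imu_feats = rng.normal(size=(700, 9 * 2))   # e.g. mean + std per IMU axis
X = np.hstack([emg_feats, imu_feats])       # feature-level sensor fusion
y = rng.integers(0, 7, size=700)            # seven gesture classes

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())   # chance level on random data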

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a depth data-guided framework based on smartphones for complex human activity recognition and automatic labeling, which consists of five clustering layers and a deep learning-based classification model to identify 12 complex daily activities.

Journal ArticleDOI
05 Jan 2022-Sensors
TL;DR: In this article, an autonomous system for unmanned aerial vehicles (UAVs) to land on moving platforms such as an automobile or a marine vessel is proposed. Unlike most state-of-the-art UAV landing frameworks, which rely on UAV onboard computers and sensors, the proposed system fully depends on the computation unit situated on the ground vehicle/marine vessel to serve as a landing guidance system.
Abstract: This work aimed to develop an autonomous system for unmanned aerial vehicles (UAVs) to land on moving platforms such as an automobile or a marine vessel, providing a promising solution for a long-endurance flight operation, a large mission coverage range, and a convenient recharging ground station. Unlike most state-of-the-art UAV landing frameworks that rely on UAV onboard computers and sensors, the proposed system fully depends on the computation unit situated on the ground vehicle/marine vessel to serve as a landing guidance system. Such a novel configuration can therefore lighten the burden of the UAV, and the computation power of the ground vehicle/marine vessel can be enhanced. In particular, we exploit a sensor fusion-based algorithm for the guidance system to perform UAV localization, whilst a control method based upon trajectory optimization is integrated. Indoor and outdoor experiments are conducted, and the results show that precise autonomous landing on a 43 cm × 43 cm platform can be performed.

Journal ArticleDOI
TL;DR: In this article, a tag-based visual-inertial localization method for off-the-shelf UAVs with only a camera and an inertial measurement unit (IMU) is proposed.