
Showing papers by "Sakorn Mekruksavanich published in 2021"


Journal ArticleDOI
26 Feb 2021-Sensors
TL;DR: In this article, the authors propose a generic HAR framework for smartphone sensor data based on Long Short-Term Memory (LSTM) networks for time-series domains, and introduce a hybrid LSTM network to improve recognition performance.
Abstract: Human Activity Recognition (HAR) employing inertial motion data has gained considerable momentum in recent years, both in research and industrial applications. Broadly, this has been driven by an acceleration in the building of intelligent and smart environments and systems that cover all aspects of human life, including healthcare, sports, manufacturing, and commerce. Such environments and systems necessitate and subsume activity recognition, which aims to recognize the actions, characteristics, and goals of one or more individuals from a temporal series of observations streamed from one or more sensors. Because conventional Machine Learning (ML) techniques rely on handcrafted features in the extraction process, current research suggests that deep-learning approaches are more suitable for automated feature extraction from raw sensor data. In this work, a generic HAR framework for smartphone sensor data is proposed, based on Long Short-Term Memory (LSTM) networks for time-series domains. Four baseline LSTM networks are comparatively studied to analyze the impact of using different kinds of smartphone sensor data. In addition, a hybrid LSTM network called 4-layer CNN-LSTM is proposed to improve recognition performance. The HAR method is evaluated on the public smartphone-based UCI-HAR dataset through various combinations of sample generation processes (OW and NOW) and validation protocols (10-fold and LOSO cross-validation). Moreover, Bayesian optimization techniques are used in this study since they are advantageous for tuning the hyperparameters of each LSTM network. The experimental results indicate that the proposed 4-layer CNN-LSTM network performs well in activity recognition, improving the average accuracy by up to 2.24% compared to prior state-of-the-art approaches.
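The abstract describes a hybrid network that feeds convolutional features into an LSTM over windowed smartphone data. The sketch below is a minimal, generic CNN-LSTM for that kind of input, not the authors' exact 4-layer architecture; the layer widths, kernel sizes, and hidden size are illustrative assumptions, while the input shape (128-sample windows, 9 inertial channels, 6 activity classes) follows the standard UCI-HAR format.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Generic CNN-LSTM hybrid for windowed inertial data (sketch only)."""
    def __init__(self, n_channels=9, n_classes=6, hidden=64):
        super().__init__()
        # Convolutional front-end extracts local temporal features per window.
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # LSTM models longer-range temporal dependencies over the conv features.
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):          # x: (batch, n_channels, window_len)
        z = self.conv(x)           # (batch, 64, window_len // 2)
        z = z.permute(0, 2, 1)     # (batch, time, features) for the LSTM
        _, (h, _) = self.lstm(z)
        return self.fc(h[-1])      # class logits

model = CNNLSTM()
logits = model(torch.randn(32, 9, 128))   # UCI-HAR style: 128-sample, 9-channel windows
```

In a setup like this, the hidden size, kernel sizes, and layer widths would be natural targets for the Bayesian hyperparameter optimization mentioned in the abstract.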

106 citations


Journal ArticleDOI
TL;DR: A novel framework for multi-class wearable user identification, based on recognizing human behavior with deep learning models, is presented, and its effectiveness is demonstrated experimentally.
Abstract: Currently, a significant amount of interest is focused on research in the field of Human Activity Recognition (HAR) as a result of the wide variety of its practical uses in real-world applications, such as biometric user identification, health monitoring of the elderly, and surveillance by authorities. The widespread use of wearable sensor devices and the Internet of Things (IoT) has made HAR a significant subject in mobile and ubiquitous computing. In recent years, the most widely used inference and problem-solving approach in HAR systems has been deep learning. Nevertheless, major challenges exist in applying HAR to biometric user identification, in which various human behaviors can be regarded as biometric qualities and used to identify people. In this research study, a novel framework for multi-class wearable user identification, based on the recognition of human behavior through deep learning models, is presented. To obtain detailed information about users during the performance of various activities, sensory data from the tri-axial gyroscopes and tri-axial accelerometers of the wearable devices are applied. Additionally, a set of experiments was conducted to validate this work and demonstrate the proposed framework's effectiveness. The results for the two basic models, namely the Convolutional Neural Network (CNN) and the Long Short-Term Memory (LSTM) network, showed that the highest accuracy across all users was 91.77% and 92.43%, respectively. Both are acceptable levels for biometric user identification.

92 citations


Journal ArticleDOI
TL;DR: Experimental results show that the hybrid DL model CNN-BiGRU outperformed the other DL models, achieving the highest recognition performance in every scenario as measured by a variety of performance indicators, including accuracy, F1-score, and the confusion matrix.
Abstract: Sensor-based human activity recognition (S-HAR) has become an important and high-impact topic of research within human-centered computing. In the last decade, successful applications of S-HAR have been presented through fruitful academic research and industrial applications, including healthcare monitoring, smart home control, and daily sport tracking. However, the growing requirement of many current applications to recognize complex human activities (CHA), as opposed to simple human activities (SHA), has begun to attract the attention of the HAR research field. Work on S-HAR has shown that deep learning (DL), a type of machine learning based on complex artificial neural networks, offers a significant degree of recognition efficiency. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are two different types of DL methods that have been successfully applied to the S-HAR challenge in recent years. In this paper, we focused on four RNN-based DL models (LSTMs, BiLSTMs, GRUs, and BiGRUs) applied to complex activity recognition tasks. The efficiency of four hybrid DL models that combine convolutional layers with these RNN-based models was also studied. Experimental studies on the UTwente dataset demonstrated that the suggested hybrid RNN-based models achieved a high level of recognition performance across a variety of performance indicators, including accuracy, F1-score, and the confusion matrix. The experimental results show that the hybrid DL model called CNN-BiGRU outperformed the other DL models with a high accuracy of 98.89% when using only complex activity data. Moreover, the CNN-BiGRU model also achieved the highest recognition performance in the other scenarios (99.44% using only simple activity data and 98.78% with a combination of simple and complex activities).
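As a companion to the abstract, here is a minimal sketch of the general pattern behind a CNN-BiGRU hybrid: a convolutional front-end feeding a bidirectional GRU. It is not the paper's exact architecture, and the channel count, class count, and layer sizes are placeholders, since the UTwente configuration is not given here.

```python
import torch
import torch.nn as nn

class CNNBiGRU(nn.Module):
    """Generic CNN-BiGRU hybrid for windowed sensor data (sketch only)."""
    def __init__(self, n_channels=6, n_classes=13, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Bidirectional GRU reads the conv feature sequence in both directions.
        self.bigru = nn.GRU(64, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):               # x: (batch, channels, time)
        z = self.conv(x).permute(0, 2, 1)
        out, _ = self.bigru(z)
        return self.fc(out[:, -1, :])   # logits from the last time step

logits = CNNBiGRU()(torch.randn(16, 6, 200))   # toy batch of sensor windows
```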

51 citations


Proceedings ArticleDOI
03 Mar 2021
TL;DR: In this paper, a hybrid model called a multichannel CNN-LSTM network was proposed to solve the human behavior recognition problem in the context of smartwatch accelerometer data.
Abstract: Recognition of human behavior has recently become an active and stimulating area of study. HAR can provide valuable information on human movement and the behavior of everyday life activities. In the last decade, a wide range of HAR-based applications have been implemented, such as healthcare tracking and biometric user authentication. Several deep learning approaches have previously been introduced to address the limitations of conventional machine learning approaches that depend on handcrafted features. Accordingly, a novel deep learning architecture to solve the HAR problem is proposed in this study. The introduced architecture is a hybrid model called a multichannel CNN-LSTM network. The model is evaluated with standard metrics (accuracy, precision, recall, and F1-score) on a public smartwatch accelerometer dataset, the DHA dataset. The proposed multichannel CNN-LSTM outperforms other deep learning methods in terms of accuracy, with a score of 96.87%.
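The abstract does not detail how the channels are arranged, so the sketch below shows one common reading of "multichannel": a separate convolutional branch per accelerometer axis whose features are concatenated before a shared LSTM. The branch widths, number of axes, and class count are illustrative assumptions, not the DHA configuration.

```python
import torch
import torch.nn as nn

class MultichannelCNNLSTM(nn.Module):
    """One conv branch per accelerometer axis, merged before an LSTM (sketch)."""
    def __init__(self, n_axes=3, n_classes=7, hidden=64):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv1d(1, 32, kernel_size=5, padding=2),
                          nn.ReLU(), nn.MaxPool1d(2))
            for _ in range(n_axes)
        ])
        self.lstm = nn.LSTM(32 * n_axes, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (batch, n_axes, time)
        feats = [b(x[:, i:i + 1, :]) for i, b in enumerate(self.branches)]
        z = torch.cat(feats, dim=1).permute(0, 2, 1)   # (batch, time, features)
        _, (h, _) = self.lstm(z)
        return self.fc(h[-1])

logits = MultichannelCNNLSTM()(torch.randn(8, 3, 128))   # tri-axial windows
```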

39 citations


Proceedings ArticleDOI
20 Aug 2021
TL;DR: In this article, three deep learning models were used to investigate real-life activities using smartphone sensors in this study, and the Att-CNN-LSTM network was introduced as a hybrid DL model to handle the human activity recognition challenge using an attention mechanism.
Abstract: Due to its vast applications in various industrial sectors, sensor-based human activity recognition (SHAR) has become a prevalent research issue in machine learning (ML) and deep learning (DL). With the improvement of numerous wearable sensors, many effective use cases have recently been revealed. According to recent research, real-world data contain more contextual information than data acquired in a laboratory environment. In this study, three deep learning models were used to investigate real-life activities using smartphone sensors. As two fundamental deep learning approaches, a convolutional neural network (CNN) and a long short-term memory (LSTM) network are used to achieve recognition. In addition, we introduce the Att-CNN-LSTM network, a hybrid DL model that handles the SHAR challenge using an attention mechanism. These three deep learning models were evaluated on a public dataset called real-life HAR (RL-HAR) using four assessment indicators: accuracy, precision, recall, and F1-score. According to the experimental data, the suggested Att-CNN-LSTM surpasses the baseline deep learning models with the highest average accuracy of 95.76%.
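The attention mechanism over the recurrent outputs is the distinctive piece here. The sketch below adds a simple additive attention pooling layer on top of a CNN-LSTM; it is a generic illustration of the idea rather than the Att-CNN-LSTM described in the paper, and the channel count, class count, and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class AttCNNLSTM(nn.Module):
    """CNN-LSTM with a simple attention pooling layer over time (sketch)."""
    def __init__(self, n_channels=6, n_classes=8, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2))
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)     # scores each time step
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                    # x: (batch, channels, time)
        z = self.conv(x).permute(0, 2, 1)
        h, _ = self.lstm(z)                  # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over time
        context = (w * h).sum(dim=1)         # weighted sum of hidden states
        return self.fc(context)

logits = AttCNNLSTM()(torch.randn(4, 6, 128))
```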

38 citations


Journal ArticleDOI
12 Nov 2021-Sensors
TL;DR: DeepAuthen, as presented in this paper, identifies smartphone users based on their physical activity patterns as measured by the accelerometer, gyroscope, and magnetometer sensors of their smartphone, and a series of user authentication tests is conducted using several deep learning classifiers, including DeepConvLSTM, on the three benchmark datasets UCI-HAR, WISDM-HARB, and HMOG.
Abstract: Smartphones as ubiquitous gadgets are rapidly becoming more intelligent and context-aware as sensing, networking, and processing capabilities advance. These devices provide users with a comprehensive platform to undertake activities such as socializing, communicating, sending and receiving e-mails, and storing and accessing personal data at any time and from any location. Nowadays, smartphones are used to store a multitude of private and sensitive data, including bank account information, personal identifiers, account passwords, and credit card information. Many users remain permanently signed in and, as a result, their mobile devices are vulnerable to security and privacy risks through attacks by criminals. Passcodes, PINs, pattern locks, facial verification, and fingerprint scans are all susceptible to various attacks, including smudge attacks, side-channel attacks, and shoulder-surfing attacks. To solve these issues, this research introduces a new continuous authentication framework called DeepAuthen, which identifies smartphone users based on their physical activity patterns as measured by the accelerometer, gyroscope, and magnetometer sensors on their smartphone. We conducted a series of tests on user authentication using several deep learning classifiers, including our proposed deep learning network termed DeepConvLSTM, on the three benchmark datasets UCI-HAR, WISDM-HARB, and HMOG. Results demonstrated that combining various motion sensor data obtained the best accuracy and equal error rate (EER) values for binary classification. We also conducted a thorough examination of the continuous authentication outcomes, and the results supported the efficacy of our framework.
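Continuous authentication systems are usually reported with the equal error rate, the operating point where the false-acceptance rate equals the false-rejection rate. The snippet below shows one standard way to estimate EER from genuine/impostor scores with scikit-learn; the labels and scores are toy values, not results from the paper.

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(y_true, scores):
    """EER: the point where the false-acceptance rate (FAR, i.e. FPR)
    equals the false-rejection rate (FRR = 1 - TPR)."""
    far, tpr, _ = roc_curve(y_true, scores)
    frr = 1.0 - tpr
    idx = np.nanargmin(np.abs(far - frr))     # closest crossing point
    return (far[idx] + frr[idx]) / 2.0

# Toy genuine-vs-impostor scores (1 = genuine user, 0 = impostor).
y = np.array([1, 1, 1, 0, 0, 0, 1, 0])
s = np.array([0.9, 0.8, 0.4, 0.3, 0.5, 0.1, 0.7, 0.2])
print(f"EER = {equal_error_rate(y, s):.3f}")
```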

30 citations


Book ChapterDOI
01 Jan 2021
TL;DR: In this paper, an architecture for biometric user identification from walking patterns that employs a convolutional neural network is proposed and validated through the generation of synthetic data from public walking datasets.
Abstract: One classification problem that is especially challenging is biometric identification, which links cybersecurity to the analysis of human behavior. Biometric data can be collected during users' activities through wearable devices, especially smartphones that incorporate a variety of sensors. In recent research, numerous identification systems using machine learning classification algorithms have been proposed to solve this classification problem. However, their identification performance is limited by the need to select suitable features from the raw biometric time-series data. Therefore, in this study, an architecture for biometric user identification from walking patterns that employs a convolutional neural network is proposed. The proposed framework was validated through the generation of synthetic data from public walking datasets. As a result, the framework described in this study provides better outcomes, in terms of model accuracy and additional metrics, than the conventional machine learning methods used for biometric user identification.
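The abstract mentions generating synthetic data from walking datasets but does not describe the procedure. A common, simple way to synthesize extra gait windows is jitter-and-scaling augmentation, sketched below; the noise level, scaling range, and window shape are assumptions for illustration only.

```python
import numpy as np

def augment_window(window, jitter_std=0.02, scale_range=(0.9, 1.1), rng=None):
    """Synthesize a new gait window by adding Gaussian jitter and a random
    per-channel scaling factor (a common, simple augmentation scheme)."""
    rng = rng or np.random.default_rng()
    scale = rng.uniform(*scale_range, size=(1, window.shape[1]))
    noise = rng.normal(0.0, jitter_std, size=window.shape)
    return window * scale + noise

# Example: one 128-sample, 3-axis accelerometer window -> five synthetic copies.
rng = np.random.default_rng(0)
original = rng.standard_normal((128, 3))
synthetic = [augment_window(original, rng=rng) for _ in range(5)]
```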

24 citations


Proceedings ArticleDOI
01 Sep 2021
TL;DR: In this article, the authors show, using four deep learning classifiers for complex human activity (CHA) recognition, that combining the two inertial measurement units outperforms employing either the accelerometer or the gyroscope alone.
Abstract: The classification of simple and complex sequences of operations is made easier by the use of heterogeneous sensors in a wearable device. Sensor-based human activity recognition (HAR) is being used on smartphone platforms for elderly healthcare monitoring, fall detection, and unhealthy behavior prevention, covering habits such as smoking, unhealthy eating, and lack of exercise. Common machine learning and deep learning techniques have recently been presented to tackle the HAR issue, with a focus on everyday activities, particularly general human activities such as moving, sitting, and standing. However, there is an intriguing and challenging HAR research subject involving more complicated activities in various environments, including smoking, eating, and drinking. The use of heterogeneous sensor data to enhance the recognition performance of sensor-based deep learning networks is considered in this work. We demonstrate, using four deep learning classifiers for complex human activity (CHA) recognition, that combining the two inertial measurement units outperforms employing either the accelerometer or the gyroscope alone. Furthermore, we describe the impact of five window sizes (5 s to 40 s) on a publicly accessible benchmark dataset and how increasing the window size affects the classification performance of CHA deep learning networks.
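Window size directly controls how much context each training sample carries and how many samples a recording yields. The snippet below segments a continuous sensor stream with a fixed-length sliding window and 50% overlap so the trade-off across 5 s to 40 s windows can be seen; the sampling rate, overlap, and channel count are assumptions, not the benchmark's actual settings.

```python
import numpy as np

def sliding_windows(signal, window_len, overlap=0.5):
    """Segment a (time, channels) sensor stream into fixed-length windows."""
    step = int(window_len * (1.0 - overlap))
    starts = range(0, len(signal) - window_len + 1, step)
    return np.stack([signal[s:s + window_len] for s in starts])

# 5 minutes of 50 Hz, 6-channel (acc + gyro) data; compare 5 s to 40 s windows.
stream = np.random.randn(5 * 60 * 50, 6)
for seconds in (5, 10, 20, 30, 40):
    wins = sliding_windows(stream, window_len=seconds * 50)
    print(f"{seconds:>2} s windows -> {len(wins)} samples of shape {wins.shape[1:]}")
```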

17 citations


Proceedings ArticleDOI
26 Aug 2021
TL;DR: In this article, the authors propose a new sensor-based HAR approach that classifies complex activities with high performance using a DL model called the InceptTime network, evaluated on a public benchmark complex activity dataset named PAMAP2.
Abstract: Effective human activity recognition can be incredibly beneficial in big data applications such as ambient healthcare-supported living. Deep learning (DL) techniques have considerably advanced research in human activity recognition (HAR). These deep learning algorithms outperform conventional machine learning methods in terms of automatic feature extraction. Many deep learning models have recently been demonstrated as state-of-the-art approaches for efficiently classifying simple and complex human behaviors to address the HAR problem. This work proposes a new sensor-based HAR approach that classifies complex activities with high performance using a DL model called the InceptTime network. The proposed architecture is evaluated on a public benchmark complex activity dataset named PAMAP2. The experimental outcomes show that the proposed InceptTime model is significantly better than the baseline DL models on the same dataset, with the highest accuracy of 88%.
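The name InceptTime suggests an Inception-style architecture for time series, in which parallel convolutions with different kernel sizes are concatenated. The block below is a generic Inception-style 1D module written as a rough illustration of that idea; it is not taken from the paper, and the kernel sizes, branch width, and input shape are assumptions.

```python
import torch
import torch.nn as nn

class InceptionModule1D(nn.Module):
    """One Inception-style block for time series: a bottleneck, parallel
    convolutions with different kernel sizes, and a max-pool branch,
    concatenated along the channel axis (sketch)."""
    def __init__(self, in_ch, branch_ch=32, kernel_sizes=(9, 19, 39)):
        super().__init__()
        self.bottleneck = nn.Conv1d(in_ch, branch_ch, kernel_size=1)
        self.convs = nn.ModuleList([
            nn.Conv1d(branch_ch, branch_ch, k, padding=k // 2)
            for k in kernel_sizes
        ])
        self.pool_branch = nn.Sequential(
            nn.MaxPool1d(3, stride=1, padding=1),
            nn.Conv1d(in_ch, branch_ch, kernel_size=1))
        self.bn = nn.BatchNorm1d(branch_ch * (len(kernel_sizes) + 1))

    def forward(self, x):                       # x: (batch, in_ch, time)
        z = self.bottleneck(x)
        branches = [conv(z) for conv in self.convs] + [self.pool_branch(x)]
        return torch.relu(self.bn(torch.cat(branches, dim=1)))

out = InceptionModule1D(in_ch=6)(torch.randn(8, 6, 256))   # -> (8, 128, 256)
```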

7 citations


Proceedings ArticleDOI
30 Jun 2021
TL;DR: In this paper, a location-based CNN-LSTM hybrid model is proposed to improve overall HAR accuracy, and it is validated on the DHA dataset using accuracy and other evaluation measures.
Abstract: Human activity recognition (HAR) is an interesting and challenging subject of study. HAR provides useful information regarding human movement and activity in ordinary life. A number of HAR-based solutions, such as wellness tracking and biometric identification systems, have been introduced over the past decade. A number of deep learning algorithms have recently been employed to overcome the complication of handcrafted features in traditional machine learning approaches. In this study, a novel deep learning framework is proposed to improve overall HAR accuracy. The framework is a location-based CNN-LSTM hybrid model. It is validated using accuracy and other evaluation measures on a public wrist-worn accelerometer dataset named the DHA dataset. When comparing the accuracy of alternative deep learning approaches, the proposed location-based CNN-LSTM ranked highest with an accuracy of 96.75%.

5 citations


Proceedings ArticleDOI
03 Mar 2021
TL;DR: In this paper, a comparative analysis of deep learning techniques for human activity recognition using sensor data captured from accelerometers and gyroscopes is presented, and the results indicate that the hybrid LSTM outperforms the baseline models.
Abstract: Due to the rapid advancement of wearable sensor technology, Human Activity Recognition (HAR) using smartphone sensor data is becoming a popular research topic. Many mobile applications, such as health monitoring and sports performance tracking, build on HAR research. In the last decade, machine learning methods have been introduced to solve the HAR problem. However, conventional approaches are limited by their feature extraction process. Given this limitation, deep learning approaches with outstanding performance have recently been presented. This paper presents a comparative analysis of deep learning techniques for HAR using sensor data captured from accelerometers and gyroscopes. In the proposed study, we employ various combinations of Convolutional Neural Networks (CNNs) and Long Short-Term Memory networks (LSTMs), removing the need for handcrafted feature extraction. The comparative results indicate that the hybrid LSTM outperforms the baseline models.

Proceedings ArticleDOI
03 Mar 2021
TL;DR: In this article, the authors conduct several experiments to discover the optimal sensor placement for each physical activity, and find that, with heterogeneous sensor data fusion, the chest position is ideal for physical activity identification, with a maximum accuracy of 94.18%.
Abstract: There are various possibilities for wearable sensors to support the recognition of human conditions. To advance wearable sensor technology, several difficult issues have been studied and solved. Efficient, high-performance deep learning models for sensor-based human activity recognition (HAR) have already been achieved. However, the impact of body-mounted sensor position on the Long Short-Term Memory (LSTM) network remains a challenging open question. In this work, we conduct several experiments to discover the optimal sensor placement for each physical activity. The experimental results show that, using heterogeneous sensor data fusion, the chest position is ideal for physical activity identification, with a maximum accuracy of 94.18%.

Proceedings ArticleDOI
26 Aug 2021
TL;DR: SE-DeepConvNet as mentioned in this paper is a lightweight deep convolutional neural network with squeeze-and-excitation modules for recognizing human activity from smartphone sensor data, which was developed and assessed on the UCI-HAR dataset.
Abstract: Human activity recognition (HAR) relying on wearable sensors has become a new and challenging area of research in pervasive and ubiquitous computing due to the rapid advancements in sensor technology. Since several deep learning (DL) networks have been introduced to handle the problem of feature extraction in machine learning, these techniques have recently garnered considerable attention. Nevertheless, most recent DL networks process sensor data by automatically extracting spatial properties without addressing cross-channel interactions at the same level; information from each sensor channel is conveyed independently and hierarchically from shallow levels to deeper levels. This paper introduces SE-DeepConvNet, a lightweight deep convolutional neural network with squeeze-and-excitation modules for recognizing human activity from smartphone sensor data. The recommended SE-DeepConvNet was developed and assessed on the UCI-HAR dataset, a public benchmark HAR dataset. According to the results obtained, SE-DeepConvNet outperforms other baseline DL networks with a maximum accuracy of 99.27%.
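A squeeze-and-excitation module learns per-channel weights from globally pooled features and uses them to recalibrate the channels, which is how it addresses cross-channel information. The sketch below is a generic 1D SE block of the kind such a network might insert after a convolutional stage; the reduction ratio and feature shapes are assumptions, not the SE-DeepConvNet configuration.

```python
import torch
import torch.nn as nn

class SEBlock1D(nn.Module):
    """Squeeze-and-excitation for 1D conv features: global-average 'squeeze',
    a two-layer bottleneck 'excitation', then channel-wise re-weighting (sketch)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                 # x: (batch, channels, time)
        s = x.mean(dim=2)                 # squeeze: (batch, channels)
        w = self.fc(s).unsqueeze(-1)      # excitation weights in [0, 1]
        return x * w                      # recalibrate each channel

# Insert after any Conv1d stage, e.g. features of shape (batch, 64, 128).
recalibrated = SEBlock1D(64)(torch.randn(16, 64, 128))
```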

Proceedings ArticleDOI
20 Aug 2021
TL;DR: In this article, the authors use artificial neural networks and other approaches, together with IoT sensing, to create a flood prediction system that outperforms competing approaches in prediction accuracy.
Abstract: Floods are natural catastrophes that negatively impact natural life, farming, business, and infrastructure every year. Different hydrological and climatic variables affect flooding. Several studies on flood catastrophe management and flood forecasting systems have been performed. However, with the assistance of recent technological developments, it is now critical to transition from individual tracking and forecasting frameworks to intelligent flood forecasting processes that support all stakeholders, since floods affect everyone. The Internet of Things (IoT) is a solution that uses embedded device hardware with a wireless communication network to transmit data from sensors to a computing device for real-time analysis. Attention in flood forecasting has shifted from mathematical and hydrological models toward algorithmic methodologies. Flood data are non-linear and variable in character, so artificial neural networks and other approaches are used to create flood prediction systems. The flood forecasting system proposed in this study uses the IoT and artificial neural networks. The web-based tool, with a dashboard display, is designed to monitor the possibility of local flooding. The system outperforms competing approaches in terms of prediction accuracy.
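The abstract does not specify the network architecture or the input variables, so the sketch below is only a plausible, hypothetical setup: a small feed-forward network mapping a few assumed hydrological readings (rainfall, river level, soil moisture, temperature) to a flood probability, trained here on synthetic data purely for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical inputs: hourly rainfall, upstream river level, soil moisture,
# and temperature; output: probability of local flooding in the next period.
model = nn.Sequential(
    nn.Linear(4, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy training step on synthetic sensor snapshots (batch of 64).
x = torch.rand(64, 4)
y = (x[:, 0] + x[:, 1] > 1.2).float().unsqueeze(1)   # synthetic flood label
loss = loss_fn(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```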

Proceedings ArticleDOI
03 Mar 2021
TL;DR: In this article, the authors focus on software engineers' perspective on security during the software design stage, and tools for measuring design metrics are employed to evaluate the software's security.
Abstract: In this period of high-speed internet, there are a number of serious challenges for the security protection of software design, especially throughout the software design life cycle, in which there are various risks involving information interaction. Significant information leakage can result from a lack of technical support and software security protection. One major problem in creating secure software is how secure software is defined and which methods are used to measure its security. This research work focuses on software engineers' perspective on security during the software design stage. Tools for measuring metrics are employed to evaluate the software's security. In this case study, a category of design metrics is used, which is assumed to provide quantitative data about the software's security.