Author

Beakcheol Jang

Bio: Beakcheol Jang is an academic researcher from Yonsei University. The author has contributed to research in the topics of computer science and information privacy, has an h-index of 15, and has co-authored 55 publications receiving 726 citations. Previous affiliations of Beakcheol Jang include Sangmyung University and North Carolina State University.


Papers
Journal ArticleDOI
TL;DR: This paper thoroughly explains how Q-learning evolved by unraveling the mathematical complexities behind it, as well as its flow from the reinforcement learning family of algorithms.
Abstract: Q-learning is arguably one of the most widely applied representative reinforcement learning approaches and one of the off-policy strategies. Since the emergence of Q-learning, many studies have described its uses in reinforcement learning and artificial intelligence problems. However, there is an information gap as to how these powerful algorithms can be leveraged and incorporated into general artificial intelligence workflows. Early Q-learning algorithms were unsatisfactory in several aspects and covered a narrow range of applications. It has also been observed that this otherwise powerful algorithm sometimes learns unrealistically and overestimates the action values, thereby degrading overall performance. Recently, with the general advances of machine learning, more variants of Q-learning, such as deep Q-learning, which combines basic Q-learning with deep neural networks, have been developed and applied extensively. In this paper, we thoroughly explain how Q-learning evolved by unraveling the mathematical complexities behind it, as well as its flow from the reinforcement learning family of algorithms. Improved variants are fully described, and we categorize Q-learning algorithms into single-agent and multi-agent approaches. Finally, we thoroughly investigate up-to-date research trends and key applications that leverage Q-learning algorithms.
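
To make the core update rule concrete, here is a minimal tabular Q-learning sketch on a hypothetical toy chain environment (the environment, reward, and hyperparameters below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Tabular Q-learning on a toy 5-state chain: move left/right, reward 1
# for reaching the rightmost state. Hyperparameters are illustrative.
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    return s2, float(s2 == n_states - 1), s2 == n_states - 1

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy: explore randomly, otherwise act greedily
        # (greedy ties broken at random so early episodes make progress)
        if rng.random() < eps:
            a = int(rng.integers(n_actions))
        else:
            a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
        s2, r, done = step(s, a)
        # off-policy update: bootstrap from the best next-state action,
        # regardless of which action the behavior policy will take next
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2

print(Q)  # action 1 (right) should dominate in every state
```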

195 citations

Journal ArticleDOI
TL;DR: An attention-based Bi-LSTM+CNN hybrid model that capitalizes on the advantages of LSTM and CNN with an additional attention mechanism is proposed; it produces more accurate classification results, as well as higher recall and F1 scores, than individual multi-layer perceptron (MLP), CNN, or LSTM models, as well as other hybrid models.
Abstract: There is a need to extract meaningful information from big data, classify it into different categories, and predict end-user behavior or emotions. Large amounts of data are generated from various sources such as social media and websites. Text classification is a representative research topic in the field of natural-language processing (NLP) that categorizes unstructured text data into meaningful categorical classes. The long short-term memory (LSTM) model and the convolutional neural network (CNN) for sentence classification produce accurate results and have recently been used in various NLP tasks. CNN models use convolutional layers and maximum pooling or max-overtime pooling layers to extract higher-level features, while LSTM models can capture long-term dependencies between word sequences and are therefore well suited to text classification. However, even with a hybrid approach that leverages the strengths of these two deep-learning models, the number of features to remember for classification remains huge, hindering the training process. In this study, we propose an attention-based Bi-LSTM+CNN hybrid model that capitalizes on the advantages of LSTM and CNN with an additional attention mechanism. We trained the model on Internet Movie Database (IMDB) movie review data to evaluate its performance, and the test results showed that the proposed hybrid attention Bi-LSTM+CNN model produces more accurate classification results, as well as higher recall and F1 scores, than individual multi-layer perceptron (MLP), CNN, or LSTM models, as well as other hybrid models.
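
A hedged sketch of such an attention-based Bi-LSTM+CNN classifier in Keras; the layer sizes, vocabulary, and sequence length are illustrative assumptions rather than the paper's exact architecture:

```python
from tensorflow.keras import layers, models

# Illustrative sizes; the paper's exact hyperparameters may differ.
vocab_size, seq_len, embed_dim = 20000, 200, 128

inputs = layers.Input(shape=(seq_len,))
x = layers.Embedding(vocab_size, embed_dim)(inputs)
# Bi-LSTM captures long-range dependencies in both directions
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
# dot-product self-attention re-weights timesteps before convolution
x = layers.Attention()([x, x])
# convolution + global max pooling extracts higher-level local features
x = layers.Conv1D(64, kernel_size=3, activation="relu")(x)
x = layers.GlobalMaxPooling1D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # binary sentiment label

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

A comparable IMDB sentiment setup could then be approximated by training this model on the tf.keras.datasets.imdb data.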

160 citations

Journal ArticleDOI
TL;DR: This paper reviews indoor positioning technologies that do not require the construction of offline fingerprint maps, categorizes them into simultaneous localization and mapping, inter/extrapolation, and crowdsourcing-based technologies, and describes their algorithms and characteristics, including advantages and disadvantages.
Abstract: Fingerprint-based wireless indoor positioning approaches are widely used for location-based services because wireless signals, such as Wi-Fi and Bluetooth, are currently pervasive in indoor spaces. The working principle of fingerprinting technology is to collect the fingerprints from an indoor environment, such as a room or a building, in advance, create a fingerprint map, and use this map to estimate the user’s current location. The fingerprinting technology is associated with a high level of accuracy and reliability. However, the fingerprint map must be entirely re-created, not only when the Wi-Fi access points are added, modified, or removed, but also when the interior features, such as walls or even furniture, are changed, owing to the nature of the wireless signals. Many researchers have realized the problems in the fingerprinting technology and are conducting studies to address them. In this paper, we review the indoor positioning technologies that do not require the construction of offline fingerprint maps. We categorize them into simultaneous localization and mapping; inter/extrapolation; and crowdsourcing-based technologies, and describe their algorithms and characteristics, including advantages and disadvantages. We compare them in terms of our own parameters: accuracy, calculation time, versatility, robustness, security, and participation. Finally, we present the future research direction of the indoor positioning techniques. We believe that this paper provides valuable information on recent indoor localization technologies without offline fingerprinting map construction.
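
For context, here is a minimal sketch of the classical fingerprinting baseline whose offline map construction the surveyed techniques aim to avoid; all data below is synthetic and illustrative:

```python
import numpy as np

# Classical fingerprinting: an offline map of RSSI vectors at known
# survey positions, queried online with k-nearest neighbors.
rng = np.random.default_rng(1)
n_ref, n_aps = 100, 6
positions = rng.uniform(0, 50, size=(n_ref, 2))           # known (x, y) survey points
fingerprints = rng.normal(-60, 10, size=(n_ref, n_aps))   # RSSI from each AP (dBm)

def locate(rssi, k=3):
    """Estimate position as the mean of the k closest fingerprints."""
    d = np.linalg.norm(fingerprints - rssi, axis=1)
    nearest = np.argsort(d)[:k]
    return positions[nearest].mean(axis=0)

query = fingerprints[0] + rng.normal(0, 2, n_aps)  # noisy reading near point 0
print(locate(query))   # should land close to positions[0]
```

The map (positions plus fingerprints) is exactly what must be re-surveyed whenever access points or interior features change, which motivates the map-free approaches the paper reviews.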

126 citations

Journal ArticleDOI
22 Aug 2019 - PLOS ONE
TL;DR: This paper explores the performance of word2vec convolutional neural networks (CNNs) in classifying news articles and tweets into related and unrelated ones, and indicates that word2vec significantly improved the accuracy of the classification model.
Abstract: Big web data from sources including online news and Twitter are good resources for investigating deep learning. However, collected news articles and tweets almost certainly contain data unnecessary for learning, which disturbs accurate learning. This paper explores the performance of word2vec convolutional neural networks (CNNs) in classifying news articles and tweets into related and unrelated ones. Using the two word-embedding algorithms of word2vec, Continuous Bag-of-Words (CBOW) and Skip-gram, we constructed a CNN with the CBOW model and a CNN with the Skip-gram model. We measured the classification accuracy of the CNN with CBOW, the CNN with Skip-gram, and a CNN without word2vec for real news articles and tweets. The experimental results indicated that word2vec significantly improved the accuracy of the classification model. The accuracy of the CBOW model was higher and more stable than that of the Skip-gram model. The CBOW model exhibited better performance on news articles, and the Skip-gram model exhibited better performance on tweets. Overall, the CNNs with word2vec were more effective on news articles than on tweets because news articles are typically more uniform than tweets.
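
A minimal sketch of the embedding step, using gensim's Word2Vec with a toy placeholder corpus (the paper used real news articles and tweets); gensim's sg flag selects between the two algorithms:

```python
import numpy as np
from gensim.models import Word2Vec

# Toy corpus standing in for collected news articles / tweets.
corpus = [
    "stocks fell sharply after the earnings report".split(),
    "the team won the championship game last night".split(),
    "new phone model released with larger screen".split(),
]

# sg=0 trains CBOW, sg=1 trains Skip-gram
cbow = Word2Vec(corpus, vector_size=50, window=3, min_count=1, sg=0)
skip = Word2Vec(corpus, vector_size=50, window=3, min_count=1, sg=1)

def embed(tokens, model, max_len=20):
    """Stack word vectors into a fixed-size (max_len, dim) matrix for a CNN."""
    vecs = [model.wv[w] for w in tokens if w in model.wv][:max_len]
    pad = np.zeros((max_len - len(vecs), model.vector_size))
    return np.vstack([np.array(vecs), pad]) if vecs else pad

x = embed(corpus[0], cbow)
print(x.shape)  # (20, 50): one input "image" for a 1-D convolutional classifier
```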

105 citations

Journal ArticleDOI
TL;DR: An accurate new analytical saturation throughput model for the infrastructure case of IEEE 802.11 in the presence of hidden terminals is presented and simulation results show that the model is accurate in a wide variety of cases.
Abstract: Due to its usefulness and wide deployment, IEEE 802.11 has been the subject of numerous studies, but still lacks a complete analytical model. Hidden terminals are common in IEEE 802.11 and cause the degradation of throughput. Despite the importance of the hidden terminal problem, there have been a relatively small number of studies that consider the effect of hidden terminals on IEEE 802.11 throughput, and many are not accurate for a wide range of conditions. In this paper, we present an accurate new analytical saturation throughput model for the infrastructure case of IEEE 802.11 in the presence of hidden terminals. Simulation results show that our model is accurate in a wide variety of cases.
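
For background on the kind of analysis involved, here is a sketch of the classic Bianchi-style saturation fixed point for 802.11 DCF without hidden terminals; the paper's actual model, which accounts for hidden terminals, is more involved, and the constants below are illustrative:

```python
from scipy.optimize import brentq

# Bianchi-style fixed point for n saturated stations, no hidden terminals.
W, m, n = 32, 5, 10   # min contention window, backoff stages, station count

def tau_of_p(p):
    # per-slot transmission probability of a station given collision prob p
    return 2 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))

def residual(p):
    # consistency condition: a transmission collides iff any of the
    # other n-1 stations transmits in the same slot
    return p - (1 - (1 - tau_of_p(p)) ** (n - 1))

p = brentq(residual, 1e-6, 0.49)   # bracket avoids the p = 0.5 singularity
print(f"collision probability p = {p:.4f}, tx probability tau = {tau_of_p(p):.4f}")
```

Saturation throughput then follows from tau and p via the standard slot-time accounting; hidden terminals break the assumption that all stations sense each other, which is the gap this paper's model addresses.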

80 citations


Cited by
Journal ArticleDOI
Pei Huang, Li Xiao, Soroor Soltani, Matt W. Mutka, Ning Xi
TL;DR: This article surveys the latest progress in WSN MAC protocol designs over the period 2002-2011 in four categories: asynchronous, synchronous, frame-slotted, and multichannel.
Abstract: Wireless Sensor Networks (WSNs) have become a leading solution in many important applications such as intrusion detection, target tracking, industrial automation, smart buildings, and so on. Typically, a WSN consists of a large number of small, low-cost sensor nodes that are distributed in the target area to collect data of interest. For a WSN to provide high throughput in an energy-efficient way, designing an efficient Medium Access Control (MAC) protocol is of paramount importance, because the MAC layer coordinates nodes' access to the shared wireless medium. To show the evolution of WSN MAC protocols, this article surveys the latest progress in WSN MAC protocol designs over the period 2002-2011. In the early development stages, designers were mostly concerned with energy efficiency because sensor nodes are usually limited in power supply. Recently, new protocols have been developed to provide multi-task support and efficient delivery of bursty traffic, so research attention has turned back to throughput and delay. This article details the evolution of WSN MAC protocols in four categories: asynchronous, synchronous, frame-slotted, and multichannel. These designs are evaluated in terms of energy efficiency, data delivery performance, and the overhead needed to maintain a protocol's mechanisms. Based on an extensive analysis of the protocols, many future directions are stated at the end of this survey. The performance of different classes of protocols could be substantially improved in future designs by taking into consideration recent advances in technologies and application demands.

570 citations

Proceedings Article
01 Jan 2007
TL;DR: In this paper, the Gaussian Process Latent Variable Model (GPLVM) is used to reconstruct a topological connectivity graph from a signal strength sequence, which can be used to perform efficient WiFi SLAM.
Abstract: WiFi localization, the task of determining the physical location of a mobile device from wireless signal strengths, has been shown to be an accurate method of indoor and outdoor localization and a powerful building block for location-aware applications. However, most localization techniques require a training set of signal strength readings labeled against a ground truth location map, which is prohibitive to collect and maintain as maps grow large. In this paper we propose a novel technique for solving the WiFi SLAM problem using the Gaussian Process Latent Variable Model (GPLVM) to determine the latent-space locations of unlabeled signal strength data. We show how GPLVM, in combination with an appropriate motion dynamics model, can be used to reconstruct a topological connectivity graph from a signal strength sequence which, in combination with the learned Gaussian Process signal strength model, can be used to perform efficient localization.
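
A minimal sketch of the core idea, using the GPy library's GPLVM; the synthetic signal-strength trace, path model, and constants below are illustrative assumptions, and the paper's motion dynamics model is omitted:

```python
import numpy as np
import GPy  # assumes the GPy library's GPLVM implementation

rng = np.random.default_rng(0)
T, n_aps = 80, 8
path = np.cumsum(rng.normal(0, 1, size=(T, 2)), axis=0)     # hidden trajectory
aps = rng.uniform(path.min(), path.max(), size=(n_aps, 2))  # hidden AP positions
# signal strength falls off with log-distance, plus measurement noise
dists = np.linalg.norm(path[:, None, :] - aps[None, :, :], axis=2)
Y = -30.0 - 20.0 * np.log10(dists + 1.0) + rng.normal(0, 1, (T, n_aps))

model = GPy.models.GPLVM(Y, input_dim=2)  # learn 2-D latent locations
model.optimize(max_iters=200)
latent = np.asarray(model.X)  # (80, 2): positions up to rotation/scale
print(latent.shape)
```

The recovered latent trace is only identifiable up to rotation, translation, and scale, which is why the paper combines the GPLVM with motion dynamics to pin down a usable topological map.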

488 citations

Journal ArticleDOI
TL;DR: An overview of current and next-generation methods for federated, secure, and privacy-preserving artificial intelligence, with a focus on medical imaging applications, is presented alongside potential attack vectors and future prospects in medical imaging and beyond.
Abstract: The broad application of artificial intelligence techniques in medicine is currently hindered by limited dataset availability for algorithm training and validation, due to the absence of standardized electronic medical records and strict legal and ethical requirements to protect patient privacy. In medical imaging, harmonized data exchange formats such as Digital Imaging and Communication in Medicine and electronic data storage are the standard, partially addressing the first issue, but the requirements for privacy preservation are equally strict. To prevent the compromise of patient privacy while promoting scientific research on large datasets that aims to improve patient care, the implementation of technical solutions that simultaneously address the demands for data protection and utilization is mandatory. Here we present an overview of current and next-generation methods for federated, secure, and privacy-preserving artificial intelligence, with a focus on medical imaging applications, alongside potential attack vectors and future prospects in medical imaging and beyond. Medical imaging data are often subject to privacy and intellectual-property restrictions; techniques such as federated learning can bridge the gap between personal data protection and data utilization for research and clinical routine, but these tools need to be secure.
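
To illustrate the federated learning idea in its simplest form, here is a minimal federated averaging (FedAvg-style) sketch, assuming a linear model and synthetic data; real medical-imaging deployments add deep models, secure aggregation, and differential privacy:

```python
import numpy as np

# Several "hospitals" fit a model locally and share only weights, never data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

def make_site(n):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(0, 0.1, n)
    return X, y

sites = [make_site(n) for n in (40, 60, 100)]   # non-uniform site sizes
w = np.zeros(3)                                  # global model

for round_ in range(50):
    updates, weights = [], []
    for X, y in sites:
        local = w.copy()
        for _ in range(5):                       # a few local gradient steps
            grad = X.T @ (X @ local - y) / len(y)
            local -= 0.1 * grad
        updates.append(local)
        weights.append(len(y))
    # server aggregates: average of local models weighted by site size
    w = np.average(updates, axis=0, weights=weights)

print(w)  # converges near true_w without any site sharing its raw data
```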

487 citations