Author

Huu-Quoc Nguyen

Bio: Huu-Quoc Nguyen is an academic researcher from Kyung Hee University. The author has contributed to research in topics: Cloud computing & Cloud testing. The author has an h-index of 4 and has co-authored 10 publications receiving 100 citations.

Papers
Proceedings ArticleDOI
07 Jul 2015
TL;DR: Describes the design and implementation of a low-cost monitoring system based on the Raspberry Pi, a single-board computer, which uses a motion-detection algorithm written in Python to significantly decrease storage usage and save investment costs.
Abstract: Nowadays, Closed-Circuit Television (CCTV) surveillance systems are utilized to keep the peace and provide security to people. There are several defects in such video surveillance systems: pictures are indistinct, anomalies cannot be identified automatically, a lot of storage space is needed to save the surveillance information, and prices remain relatively high. This paper describes the design and implementation of a low-cost monitoring system based on the Raspberry Pi, a single-board computer, running a motion-detection algorithm written in Python, its default programming environment. The system uses the motion-detection algorithm to significantly decrease storage usage and save investment costs. The algorithm is implemented on the Raspberry Pi, which enables live camera streaming along with motion detection. The live video can be viewed from any web browser, even from a mobile device, in real time.
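The paper's storage-saving mechanism is essentially frame differencing: keep video only while consecutive frames differ. A minimal, self-contained sketch of that idea in plain Python (function names and thresholds are illustrative; the actual system reads frames from the Pi camera module):

```python
# Minimal frame-differencing motion detector, sketching the storage-saving
# idea described in the paper: frames are only kept when motion is detected.
# Frames are modeled here as 2-D lists of grayscale values (0-255).

def detect_motion(prev_frame, curr_frame, pixel_threshold=25, area_fraction=0.01):
    """Return True when enough pixels changed between two consecutive frames."""
    changed = 0
    total = len(curr_frame) * len(curr_frame[0])
    for row_prev, row_curr in zip(prev_frame, curr_frame):
        for p, c in zip(row_prev, row_curr):
            if abs(c - p) > pixel_threshold:
                changed += 1
    return changed / total > area_fraction

def filter_frames(frames, **kwargs):
    """Keep the first frame plus every frame that follows detected motion."""
    kept = [frames[0]]
    for prev, curr in zip(frames, frames[1:]):
        if detect_motion(prev, curr, **kwargs):
            kept.append(curr)
    return kept
```

In the real deployment the detector would gate both recording and live-stream alerts, and a library such as OpenCV would supply the frames from the camera.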

62 citations

Journal ArticleDOI
TL;DR: A new approach to a probabilistic key predistribution scheme that guarantees a higher probability than other schemes of sharing keys between nodes within signal range, and is expected to minimize the key ring, reduce the communication overhead, and provide high connectivity.
Abstract: In wireless sensor networks (WSNs), key management is one of the crucial aspects of security. Although existing key management schemes are enough to solve most of the security constraints on WSNs, a random-key predistribution scheme has recently evolved as an efficient solution for sharing keys between sensor nodes. Studying the signal ranges of the sensor nodes might significantly improve the performance of the key-sharing mechanism, but this possibility remains unexploited and needs further attention. Hence, in this paper, we propose a new approach for a probabilistic key predistribution scheme that guarantees a higher probability of sharing keys between nodes that are within the signal range than that of other schemes. As a result, the proposed approach provides adequate security and is expected to minimize the key ring, reduce the communication overhead, and provide high connectivity. We also compare our present scheme with existing algorithms developed for the same purpose and observe that the proposed scheme performed better, even with different deployment errors.
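The scheme extends classical random key predistribution, in which every node receives a random key ring drawn from a shared pool and two nodes can secure a link only if their rings intersect. A sketch of that baseline (all names, parameters, and values are purely illustrative):

```python
import random

def assign_key_rings(num_nodes, pool_size, ring_size, seed=0):
    """Give each node a random subset (key ring) drawn from a global key pool."""
    rng = random.Random(seed)
    pool = list(range(pool_size))
    return [set(rng.sample(pool, ring_size)) for _ in range(num_nodes)]

def can_communicate(ring_a, ring_b):
    """Two nodes can establish a secure link iff their key rings intersect."""
    return bool(ring_a & ring_b)

def connectivity(rings):
    """Fraction of node pairs that share at least one key."""
    n = len(rings)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    shared = sum(can_communicate(rings[i], rings[j]) for i, j in pairs)
    return shared / len(pairs)
```

The paper's contribution is to bias this process using signal range, so that nodes likely to be physical neighbors are more likely to share keys; that biasing step is not shown here.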

16 citations

Journal ArticleDOI
TL;DR: This research focuses on a comprehensive analysis of Gaussian process performance issues, highlighting ways to drastically reduce the complexity of hyper-parameter learning and training phases, which could be applicable in predicting the CPU utilization in the demonstrated application.
Abstract: For the past ten years, the Gaussian process has become increasingly popular for modeling numerous inference and reasoning solutions due to its robustness and dynamic features. Particularly concerning regression and classification data, the combination of Gaussian processes and Bayesian learning is considered one of the most appropriate supervised learning approaches in terms of accuracy and tractability. However, due to its high complexity in computation and data storage, the Gaussian process performs poorly when processing large input datasets. Because of this limitation, the method is ill-equipped to deal with large-scale systems that require reasonable precision and a fast reaction rate. To address this drawback, our research focuses on a comprehensive analysis of Gaussian process performance issues, highlighting ways to drastically reduce the complexity of the hyper-parameter learning and training phases, applied here to predicting CPU utilization in the demonstrated application. The purpose of this application is to save energy by distributively engaging Gaussian process regression to monitor and predict the status of each computing node. Subsequently, a migration mechanism migrates system-level processes between cores and turns off idle ones in order to reduce power consumption while still maintaining overall performance.
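Gaussian process regression itself reduces to a kernel-matrix solve followed by a weighted sum, which is where the cubic cost the paper targets comes from. A minimal stdlib-only sketch of the exact (unaccelerated) posterior mean, with all names and hyper-parameter values chosen for illustration:

```python
import math

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential kernel, a common GP covariance choice."""
    return math.exp(-((a - b) ** 2) / (2 * length_scale ** 2))

def solve(matrix, rhs):
    """Solve a small dense linear system by Gaussian elimination with pivoting."""
    n = len(rhs)
    m = [row[:] + [r] for row, r in zip(matrix, rhs)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def gp_predict(train_x, train_y, test_x, noise=1e-6, length_scale=1.0):
    """Posterior mean of a zero-mean GP at test_x given noisy observations."""
    n = len(train_x)
    K = [[rbf_kernel(train_x[i], train_x[j], length_scale)
          + (noise if i == j else 0.0) for j in range(n)] for i in range(n)]
    alpha = solve(K, train_y)  # the O(n^3) step that dominates GP cost
    return [sum(rbf_kernel(x, train_x[j], length_scale) * alpha[j]
                for j in range(n)) for x in test_x]
```

The O(n^3) solve is the bottleneck that the paper's complexity-reduction techniques address; exact inference like this is only practical for small training sets.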

13 citations

Proceedings ArticleDOI
04 Jan 2016
TL;DR: This work considers the problem of similarity search over large datasets in a distributed environment and proposes a new approach to using the VP-tree algorithm in a parallel environment, achieving good performance while meeting scalability and fault-tolerance requirements as data scales up.
Abstract: We consider the problem of similarity search over large datasets in a distributed environment. The proposed framework employs the VP-tree algorithm, integrated on top of the MapReduce framework, to achieve good performance and to meet scalability and fault-tolerance requirements as data scales up. Since the VP-tree algorithm was initially designed for partitioning and searching data on local disk, we propose a new approach to using it in a parallel environment. The key point of the VP-tree algorithm is that it distributes similar data points into groups, thereby reducing the number of data points that need to be scanned during the search stage. Consequently, the response time of the entire system is improved. In addition, we use the open-source computer vision library OpenCV to detect similarity among images in the dataset. We evaluate the performance of our proposed framework using synthetic data to show the benefits of our approach. The experiment shows that our proposed framework achieves a 57% improvement in response time compared with running the search job in the traditional Hadoop framework. We also compared our application's running time on Docker containers against a VM-based environment. The results show that deploying our system on Docker containers provides higher performance than a VM-based environment in terms of response time.
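A VP-tree picks a vantage point, splits the remaining points by their median distance to it, and uses the triangle inequality to prune entire subtrees during search. A single-machine sketch of that data structure (the paper distributes it over MapReduce; class and method names here are illustrative):

```python
class VPTree:
    """Vantage-point tree over a metric space: the vantage point splits the
    remaining points by median distance, and the triangle inequality lets
    the search skip whole subtrees."""

    def __init__(self, points, dist):
        # points must be non-empty; dist must be a metric
        self.dist = dist
        self.vp = points[0]
        self.mu = 0.0
        self.inside = None   # subtree with dist(vp, p) <  mu
        self.outside = None  # subtree with dist(vp, p) >= mu
        rest = points[1:]
        if rest:
            ds = [dist(self.vp, p) for p in rest]
            self.mu = sorted(ds)[len(ds) // 2]
            inner = [p for p, d in zip(rest, ds) if d < self.mu]
            outer = [p for p, d in zip(rest, ds) if d >= self.mu]
            if inner:
                self.inside = VPTree(inner, dist)
            if outer:
                self.outside = VPTree(outer, dist)

    def search(self, query, radius, results=None):
        """Collect every stored point within `radius` of `query`."""
        if results is None:
            results = []
        d = self.dist(query, self.vp)
        if d <= radius:
            results.append(self.vp)
        # Triangle-inequality pruning: skip a side that cannot hold matches.
        if self.inside is not None and d - radius < self.mu:
            self.inside.search(query, radius, results)
        if self.outside is not None and d + radius >= self.mu:
            self.outside.search(query, radius, results)
        return results
```

With `tree = VPTree(points, metric)`, `tree.search(q, r)` returns every stored point within distance r of q while visiting only the subtrees the triangle inequality cannot rule out, which is the scan reduction the abstract describes.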

11 citations

Proceedings ArticleDOI
04 Dec 2014
TL;DR: This paper develops a scheduling algorithm that schedules tasks while finding the optimal number of VMs needed so that the application's execution time is minimized, and a cost-efficiency algorithm that finds the minimum cost the CSP has to pay the CHP.
Abstract: Real-time applications require a system with enough processing capacity to return results and complete all tasks in time. Cloud computing represents an attractive and cost-efficient form of server-based computing and the application-service-provider model. Virtualization technology enables Cloud Service Providers (CSPs) to dynamically allocate their resources based on workload fluctuations from cloud consumers. Cloud Service Providers have to trade off among different types of virtual machines in order to satisfy Quality of Service (QoS) requirements on response time while also achieving cost efficiency when hiring Virtual Machines (VMs) from Cloud Hosting Providers (CHPs). In this paper, we propose the Cost-efficient Real-time Applications Scheduling (CERAS) algorithm for cloud computing to solve the aforementioned issue. We first develop a scheduling algorithm that schedules tasks while finding the optimal number of VMs needed so that the application's execution time is minimized. Based on that optimal number, we develop a cost-efficiency algorithm that finds the minimum cost the CSP has to pay the CHP. The experimental results show how efficiently the CERAS algorithm can guarantee the application's deadline while achieving the optimal resources and cost, compared with other traditional approaches.
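Under the simplifying assumption that tasks are divisible across identical VMs, the two stages the paper describes, sizing the VM pool to the deadline and then minimizing rental cost, can be sketched as follows (this is a toy model, not the CERAS algorithm itself; all names and the pricing model are assumptions):

```python
import math

def min_vms_for_deadline(task_loads, vm_rate, deadline):
    """Smallest number of identical VMs whose combined capacity finishes all
    tasks before the deadline, assuming perfectly divisible work."""
    total = sum(task_loads)
    return math.ceil(total / (vm_rate * deadline))

def cheapest_plan(task_loads, vm_types, deadline):
    """vm_types: list of (processing_rate, hourly_cost) pairs.
    Return (vm_count, rate, total_cost) for the cheapest configuration
    that still meets the deadline."""
    best = None
    for rate, cost in vm_types:
        n = min_vms_for_deadline(task_loads, rate, deadline)
        total_cost = n * cost * deadline  # rent n VMs for the whole window
        if best is None or total_cost < best[2]:
            best = (n, rate, total_cost)
    return best
```

The real scheduler must also respect task boundaries and per-task deadlines, which is what makes the paper's problem nontrivial.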

4 citations


Cited by
Journal ArticleDOI
TL;DR: A new authentication and key agreement scheme using elliptic curve cryptography that has the ability to resist a number of known attacks, including those found in both of Chang and Le's protocols.
Abstract: In recent years, user authentication has emerged as an interesting field of research in wireless sensor networks. Most recently, in 2016, Chang and Le presented a scheme to authenticate users in wireless sensor networks using a password and smart card. They proposed two protocols, P1 and P2. P1 is based on exclusive-OR (XOR) and hash functions, while P2 deploys elliptic curve cryptography in addition to the two functions used in P1. Although their protocols are efficient, we point out that both P1 and P2 are vulnerable to session-specific temporary information attack and offline password guessing attack, while P1 is also vulnerable to session key breach attack. In addition, we show that both protocols P1 and P2 are inefficient in the authentication and password change phases. To withstand these weaknesses found in their protocols, we design a new authentication and key agreement scheme using elliptic curve cryptography. Rigorous formal security proofs using the broadly accepted random oracle model and the Burrows-Abadi-Needham logic, and verification using the well-known Automated Validation of Internet Security Protocols and Applications tool, are performed on our scheme. The analysis shows that our designed scheme has the ability to resist a number of known attacks, including those found in both of Chang and Le's protocols. Copyright © 2016 John Wiley & Sons, Ltd.
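The scheme's added strength over the hash-and-XOR protocol P1 comes from elliptic curve operations. A toy elliptic-curve Diffie-Hellman over the small textbook curve y^2 = x^3 + 2x + 2 (mod 17) illustrates the key-agreement primitive involved (demonstration parameters only; this is not the paper's actual protocol, and real deployments use standardized curves):

```python
# Toy elliptic-curve arithmetic over y^2 = x^3 + 2x + 2 (mod 17).
# Points are (x, y) tuples; None represents the point at infinity.
P, A = 17, 2

def ec_add(p1, p2):
    """Add two points on the curve using the chord-and-tangent rule."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None  # inverse points sum to the point at infinity
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P  # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P         # chord slope
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def ec_mul(k, point):
    """Scalar multiplication k*point by double-and-add."""
    result, addend = None, point
    while k:
        if k & 1:
            result = ec_add(result, addend)
        addend = ec_add(addend, addend)
        k >>= 1
    return result
```

Two parties with secrets a and b derive the same shared point from each other's public points, since a*(b*G) = b*(a*G); the security of schemes like the paper's rests on the hardness of recovering the scalar from the public point.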

96 citations

Journal ArticleDOI
TL;DR: The proposed energy-efficient solution for orchestrating resources in cloud computing, based on the Gaussian process regression method, can achieve significant results in reducing energy consumption while maintaining system performance.

69 citations

Journal ArticleDOI
TL;DR: The experimental results indicate that, compared with the wavelet support vector machine, autoregressive integrated moving average, adaptive network-based fuzzy inference system, and tuned support vector regression, the proposed algorithm is superior to the other four prediction algorithms in prediction accuracy and efficiency.
Abstract: In order to reduce the energy consumption of a cloud data center, it is necessary to schedule cloud resources reasonably. Accurate prediction of cloud computing load can be very helpful for resource scheduling to minimize energy consumption. In this paper, a cloud load prediction model based on a weighted wavelet support vector machine (WWSVM) is proposed to predict the host load sequence in the cloud data center. The model combines the wavelet transform and the support vector machine to exploit the advantages of both, and assigns weights to the samples, which reflects the importance of different sample points and improves the accuracy of load prediction. In order to find the optimal combination of parameters, we propose a parameter optimization algorithm based on particle swarm optimization (PSO). Finally, based on the WWSVM model, a load prediction algorithm is proposed for cloud computing using a PSO-based weighted support vector machine. The Google cloud computing dataset is used to verify the proposed algorithm by experiments. The experimental results indicate that, compared with the wavelet support vector machine, autoregressive integrated moving average, adaptive network-based fuzzy inference system, and tuned support vector regression, the proposed algorithm is superior to the other four prediction algorithms in prediction accuracy and efficiency.
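PSO, the parameter-search component of the proposed model, maintains a swarm of candidate solutions whose velocities blend inertia with attraction toward each particle's personal best and the swarm's global best. A generic stdlib-only sketch, here minimizing an arbitrary objective rather than tuning SVM parameters (all names and coefficient values are illustrative):

```python
import random

def pso_minimize(objective, bounds, swarm_size=20, iterations=60, seed=1):
    """Minimal particle swarm optimizer over box-bounded dimensions."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(swarm_size)]
    vel = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]                    # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(swarm_size), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and attraction weights
    for _ in range(iterations):
        for i in range(swarm_size):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # clamp the new position to the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's setting, `objective` would be a cross-validation error of the weighted wavelet SVM and the dimensions its kernel and penalty parameters.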

50 citations

Proceedings ArticleDOI
18 Apr 2017
TL;DR: This paper provides a general formulation of the Elastic provisioning of Virtual machines for Container Deployment (for short, EVCD) as an Integer Linear Programming problem, which takes explicitly into account the heterogeneity of container requirements and virtual machine resources.
Abstract: Docker containers make it possible to package an application together with all its dependencies and easily run it in any environment. Thanks to their ease of use and portability, containers are gaining increasing interest and promise to change the way cloud platforms are designed and managed. For their execution in the cloud, we need to solve the container deployment problem, which deals with the identification of an elastic set of computing machines that can host and execute those containers, while considering the diversity of their requirements. In this paper, we provide a general formulation of the Elastic provisioning of Virtual machines for Container Deployment (EVCD, for short) as an Integer Linear Programming problem, which explicitly takes into account the heterogeneity of container requirements and virtual machine resources. Besides optimizing multiple QoS metrics, EVCD can reallocate containers at runtime when a QoS improvement can be achieved. Using the proposed formulation as a benchmark, we evaluate two well-known heuristics, i.e., greedy first-fit and round-robin, that are usually adopted for solving the container deployment problem.
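Greedy first-fit, one of the two heuristics benchmarked against the EVCD formulation, places each container on the first virtual machine with spare capacity and opens a new one otherwise. A sketch under the assumption of identical VMs and two-dimensional (CPU, memory) requirements:

```python
def first_fit(containers, vm_capacity):
    """Greedy first-fit placement of (cpu, mem) containers onto identical VMs.
    Returns the per-container VM index and the number of VMs opened."""
    vms = []        # each VM tracked as [used_cpu, used_mem]
    placement = []  # placement[i] = index of the VM hosting container i
    for cpu, mem in containers:
        for idx, vm in enumerate(vms):
            if vm[0] + cpu <= vm_capacity[0] and vm[1] + mem <= vm_capacity[1]:
                vm[0] += cpu
                vm[1] += mem
                placement.append(idx)
                break
        else:
            # no existing VM fits: provision a new one
            vms.append([cpu, mem])
            placement.append(len(vms) - 1)
    return placement, len(vms)
```

Unlike the ILP formulation, first-fit never revisits earlier decisions, which is exactly the gap the EVCD benchmark is designed to quantify.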

50 citations

Proceedings ArticleDOI
18 Apr 2017
TL;DR: The tools available to measure the performance of Docker from the perspective of the host operating system and of the virtualization environment are explored, and a characterization of the CPU and disk I/O overhead introduced by containers is provided.
Abstract: Today, a new technology is changing the way platforms for the internet of services are designed and managed. This technology is the container (e.g., Docker and LXC). The internet-of-services industry is adopting container technology both for internal usage and as a commercial offering. The use of containers as the base technology for large-scale systems opens many challenges in the area of run-time resource management, for example: autoscaling, optimal deployment, and monitoring. Specifically, monitoring of container-based systems is at the foundation of any resource management solution, and it is the focus of this work. This paper explores the tools available to measure the performance of Docker from the perspective of the host operating system and of the virtualization environment, and it provides a characterization of the CPU and disk I/O overhead introduced by containers.
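At the lowest level, container CPU monitoring boils down to differencing the cumulative CPU-time counters that the kernel's cgroup accounting exposes and that Docker's stats tooling reads. A sketch of that conversion (the function name and parameters are illustrative, not an API from the paper or from Docker):

```python
def cpu_percent(prev_usage_ns, curr_usage_ns, interval_s, num_cpus):
    """Convert two cumulative CPU-time readings (nanoseconds, as exposed by
    cgroup accounting) taken interval_s apart into a utilization percentage
    normalized over the CPUs available to the container."""
    delta = curr_usage_ns - prev_usage_ns
    return 100.0 * delta / (interval_s * 1e9 * num_cpus)
```

Comparing such readings taken inside and outside the container over the same workload window is one way to expose the kind of CPU overhead the paper characterizes.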

49 citations