
Showing papers by "Qiang Duan published in 2019"


Journal ArticleDOI
TL;DR: This paper proposes a method for predicting the malicious behavior of SIoT accounts based on threat intelligence: a support vector machine (SVM) obtains threat intelligence related to the malicious behavior of target accounts, and the contextual data in that intelligence is analyzed to predict the behavior of malicious accounts.
Abstract: The Social Internet of Things (SIoT) is a combination of the Internet of Things (IoT) and social networks, which enables better service discovery and improves the user experience. Because the threat posed by the malicious behavior of social network accounts also affects the SIoT, this paper studies the analysis and prediction of malicious behavior by SIoT accounts and proposes a prediction method based on threat intelligence. The method uses a support vector machine (SVM) to obtain threat intelligence related to the malicious behavior of target accounts and analyzes the contextual data in that intelligence to predict the behavior of malicious accounts. The proposed prediction method is verified by collecting and analyzing data in a SIoT environment.
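The abstract names an SVM classifier trained on threat-intelligence features as the core of the prediction step. A minimal, hypothetical sketch of that idea is below; the feature names and synthetic data are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: classifying SIoT accounts as benign vs. malicious with
# an SVM. Features and data are illustrative, not the paper's actual dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Synthetic per-account "threat intelligence" features (assumed for illustration):
# [posting rate, fraction of links flagged by intel feeds, follower/following ratio]
benign = rng.normal(loc=[5, 0.05, 1.0], scale=[2, 0.03, 0.3], size=(200, 3))
malicious = rng.normal(loc=[40, 0.6, 0.1], scale=[10, 0.15, 0.05], size=(200, 3))

X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)  # 1 = account predicted to act maliciously

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# RBF-kernel SVM, the standard choice for low-dimensional numeric features.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

With well-separated synthetic classes like these, the classifier should score near 1.0; real threat-intelligence features would overlap far more.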

6 citations


Proceedings ArticleDOI
08 Jul 2019
TL;DR: The notion of network-cloud/fog service unification is introduced and an architectural framework for unified network-cloud/fog service provisioning is proposed, together with a survey that reflects the state of the art of research on enabling network-cloud/fog service unification.
Abstract: The recent developments in networking research leverage the principles of virtualization and service-orientation to enable fundamental changes in network architecture, which forms a trend of network cloudification that enables network systems to be realized using cloud technologies and network services to be provisioned following the cloud service model. On the other hand, the latest progress in cloud and fog computing has made networking an indispensable ingredient for cloud/fog service delivery. Convergence of networking and cloud/fog computing technologies enables unification of network and cloud/fog service provisioning, which has become an active research area that attracts interest from both academia and industry. In this paper, we introduce the notion of network-cloud/fog service unification and propose an architectural framework for unified network-cloud/fog service provisioning. Then we present a survey that reflects the state of the art of research on enabling network-cloud/fog service unification. We also discuss challenges to realizing such unification and identify some opportunities for future research, with a hope to arouse the research community's interest in this exciting interdisciplinary field.

5 citations


Proceedings ArticleDOI
01 Nov 2019
TL;DR: This paper designs a heuristic algorithm to maximize the number of successfully scheduled requests and minimize the number of preempted advance reservation requests, while minimizing the completion time of each request.
Abstract: As scientific applications and business services increasingly migrate to clouds, big data of various types with different priorities need to be transferred between geographically distributed cloud-based data centers. It has become a critical task for Cloud Service Providers (CSP) to fully utilize the expensive bandwidth resources of the links connecting such data centers while guaranteeing users' Quality of Experience (QoE). Most high-performance networks based on software-defined networking (SDN) provide the capability of advance bandwidth reservation. This paper focuses on the scheduling of multiple user requests of two different types with different priorities, namely, advance bandwidth reservation with a lower priority and immediate bandwidth reservation with a higher priority, to maximize the total satisfaction of user requests. We formulate this co-scheduling problem as a generic optimization problem, which is shown to be NP-complete. We design a heuristic algorithm to maximize the number of successfully scheduled requests and minimize the number of preempted advance reservation requests, while minimizing the completion time of each request. Extensive simulation results show that our scheduling scheme significantly outperforms greedy approaches in terms of user satisfaction degree.
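The abstract describes two request classes on a shared link, where higher-priority immediate reservations may preempt lower-priority advance reservations. The sketch below is a deliberately simplified toy (one link, time-slotted, arrival-order processing), not the paper's heuristic; the capacity value and request shapes are assumptions for illustration.

```python
# Toy sketch (not the paper's algorithm): co-scheduling immediate (high-priority)
# and advance (low-priority) bandwidth requests on one time-slotted link, where
# an immediate request may preempt advance reservations when capacity is short.
from dataclasses import dataclass

CAPACITY = 10  # Gb/s per time slot (illustrative value)

@dataclass
class Request:
    name: str
    bandwidth: int           # Gb/s needed in every slot of [start, end)
    start: int
    end: int
    immediate: bool = False  # immediate requests have the higher priority

def schedule(requests):
    """Place requests in arrival order; an immediate request may preempt
    overlapping advance reservations when the link is short of capacity."""
    usage = {}               # time slot -> bandwidth currently in use
    placed, preempted = [], []

    for req in requests:
        slots = range(req.start, req.end)

        def fits():
            return all(usage.get(t, 0) + req.bandwidth <= CAPACITY for t in slots)

        if not fits() and req.immediate:
            # Tear down overlapping advance reservations until the request fits.
            for victim in [p for p in placed if not p.immediate]:
                if victim.start < req.end and req.start < victim.end:
                    for t in range(victim.start, victim.end):
                        usage[t] -= victim.bandwidth
                    placed.remove(victim)
                    preempted.append(victim)
                    if fits():
                        break

        if fits():
            for t in slots:
                usage[t] = usage.get(t, 0) + req.bandwidth
            placed.append(req)

    return placed, preempted

# Two advance reservations arrive first, then an immediate request that
# cannot fit in slot 1 without preempting both of them.
reqs = [
    Request("AR-1", bandwidth=6, start=0, end=4),
    Request("AR-2", bandwidth=4, start=0, end=2),
    Request("IR-1", bandwidth=8, start=1, end=3, immediate=True),
]
placed, preempted = schedule(reqs)
print([r.name for r in placed], [r.name for r in preempted])
```

The paper's heuristic additionally minimizes the number of preemptions and completion times; a refinement of this sketch would choose victims that free just enough bandwidth rather than preempting in arrival order.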

3 citations


Book ChapterDOI
27 Jun 2019
TL;DR: This chapter discusses how today's "On-Premises" server clients conduct performance analysis before switching to Cloud computing vendors, discusses the differences between the Cloud computing and MEC paradigms, and envisions the challenges and opportunities for performance tests on future MEC service platforms.
Abstract: Mobile Edge Computing (MEC) is a fast growing research area that may soon offer alternate service platforms to clients beyond today's Cloud Computing vendors (e.g., Amazon AWS, Google Cloud, or Microsoft Azure). When MEC services become available, just as when Cloud Computing services were first presented to "On-Premises" server clients, business owners and engineers will still have to practically deploy and host their applications on MEC servers, measure performance, and be convinced that MEC services can withstand the real user load without any performance degradation or disruption before finally making the switch, despite available research data on compatibility. In this paper, we first discuss how today's "On-Premises" server clients conduct performance analysis before switching to Cloud computing vendors, then discuss the differences between the Cloud computing and MEC paradigms, and finally try to envision the challenges and opportunities that may unfold for performance tests on the future MEC service platforms.

2 citations


Proceedings ArticleDOI
01 May 2019
TL;DR: A Monte Carlo localization algorithm based on Newton interpolation and differential evolution algorithm is proposed to solve the problem that the localization technology in the existing Internet of Things environment has insufficient localization accuracy and low localization efficiency.
Abstract: The accuracy of node localization is the key to the location service applications in the Internet of Things. In this paper, a Monte Carlo localization algorithm based on Newton interpolation and differential evolution algorithm is proposed to solve the problem that the localization technology in the existing Internet of Things environment has insufficient localization accuracy and low localization efficiency. The algorithm uses Newton interpolation to reduce the sampling region, and improves the sampling success rate through the differential crossing step of the differential evolution algorithm. The simulation results show that the localization accuracy and localization efficiency of the algorithm are greatly improved compared with existing algorithms.
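The abstract states that Newton interpolation is used to shrink the Monte Carlo sampling region. A small self-contained sketch of that ingredient is below: Newton divided-difference interpolation over a node's recent position estimates extrapolates its next position, which could serve as the center of a reduced sampling region. The trajectory data are illustrative assumptions, and this omits the differential-evolution resampling step.

```python
# Illustrative sketch (one ingredient of the abstract's algorithm, not the
# whole method): Newton divided-difference interpolation over a mobile node's
# recent position estimates, extrapolated one step ahead to center a reduced
# Monte Carlo sampling region.
def newton_coefficients(xs, ys):
    """In-place divided-difference table giving the coefficients of the
    Newton form of the polynomial through the points (xs[i], ys[i])."""
    coeffs = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton-form polynomial at x (Horner-like scheme)."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[i]) + coeffs[i]
    return result

# Assumed past position estimates of a mobile node at times t = 0..3.
times = [0.0, 1.0, 2.0, 3.0]
x_hist = [1.0, 2.0, 4.0, 7.0]   # x-coordinates (accelerating motion)
y_hist = [0.0, 1.0, 1.5, 1.8]   # y-coordinates (decelerating motion)

cx = newton_coefficients(times, x_hist)
cy = newton_coefficients(times, y_hist)

# Extrapolated center of the reduced sampling region at t = 4; particles
# would then be drawn only from a neighborhood of this point.
pred = (newton_eval(times, cx, 4.0), newton_eval(times, cy, 4.0))
print(pred)  # approximately (11.0, 2.2)
```

Shrinking the sampling region this way raises the sampling success rate because fewer particles are drawn in areas the node could not have reached, which is the efficiency gain the abstract claims.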

1 citation