
Showing papers by "Qiang He published in 2020"


Journal ArticleDOI
TL;DR: This work proposes EUAGame, a game-theoretic approach that formulates the EUA problem as a potential game, and designs a novel decentralized algorithm for finding a Nash equilibrium in the game as a solution to the EUA problem.
Abstract: Edge Computing provides mobile and Internet-of-Things (IoT) app vendors with a new distributed computing paradigm which allows an app vendor to deploy its app at hired edge servers distributed near app users at the edge of the cloud. This way, app users can be allocated to hired edge servers nearby to minimize network latency and energy consumption. A cost-effective edge user allocation (EUA) requires maximum app users to be served with minimum overall system cost. Finding a centralized optimal solution to this EUA problem is NP-hard. Thus, we propose EUAGame, a game-theoretic approach that formulates the EUA problem as a potential game. We analyze the game and show that it admits a Nash equilibrium. Then, we design a novel decentralized algorithm for finding a Nash equilibrium in the game as a solution to the EUA problem. The performance of this algorithm is theoretically analyzed and experimentally evaluated. The results show that the EUA problem can be solved effectively and efficiently.

244 citations
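The decentralized equilibrium-seeking idea can be sketched with a toy congestion game (illustrative only; the paper's cost model and algorithm are richer): each user repeatedly switches to the server that lowers its own cost, and because every switch decreases a potential function, the process must settle at a Nash equilibrium.

```python
# Illustrative best-response dynamics in a toy congestion game
# (a hypothetical stand-in for the EUA potential game, not the paper's algorithm).

def best_response_dynamics(num_users, num_servers, max_rounds=100):
    """Each user greedily moves to the least-loaded server.

    In a congestion game the load profile induces a potential function,
    so sequential best responses must reach a Nash equilibrium."""
    choice = [0] * num_users                      # all users start on server 0
    for _ in range(max_rounds):
        changed = False
        for u in range(num_users):
            load = [0] * num_servers
            for v, s in enumerate(choice):
                if v != u:
                    load[s] += 1                  # load excluding user u
            best = min(range(num_servers), key=lambda s: load[s])
            if load[best] < load[choice[u]]:      # strictly better -> switch
                choice[u] = best
                changed = True
        if not changed:                           # no user wants to deviate
            break                                 # Nash equilibrium reached
    return choice

eq = best_response_dynamics(num_users=6, num_servers=3)
counts = [eq.count(s) for s in range(3)]
print(counts)  # [2, 2, 2] -- a balanced equilibrium allocation
```

The termination argument is exactly the potential-game property the abstract mentions: a strictly improving move by any single user strictly decreases the shared potential, so cycles are impossible.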


Journal ArticleDOI
TL;DR: In this article, waste glass powder and class C fly ash (FC) were mixed at varying ratios (100:0, 75:25, 50:50, 25:75, 0:100) and activated by sodium hydroxide solutions of different concentrations.

213 citations


Journal ArticleDOI
TL;DR: This work proposes a posterior-neighborhood-regularized LF (PLF) model for QoS prediction, and experimental results from large scale QoS datasets demonstrate that PLF outperforms state-of-the-art models in terms of both accuracy and efficiency.
Abstract: Neighborhood regularization is highly important for a latent factor (LF)-based Quality-of-Service (QoS) predictor since similar users usually experience similar QoS when invoking similar services. Current neighborhood-regularized LF models rely on prior neighborhood information obtained from raw QoS data or geographical information. The former suffers from low prediction accuracy due to the difficulty of constructing the neighborhood based on incomplete QoS data, while the latter requires additional geographical information that is usually difficult to collect considering information security, identity privacy, and commercial interests in real-world scenarios. To address the above issues, this work proposes a posterior-neighborhood-regularized LF (PLF) model for QoS prediction. The main idea is to decompose the LF analysis process into three phases: a) primal LF extraction, where LFs are extracted to represent involved users/services based on known QoS data; b) posterior-neighborhood construction, where the neighborhood of each user/service is built based on similarities between their primal LF vectors; and c) posterior-neighborhood-regularized LF analysis, where the objective function is regularized by both the posterior neighborhood of users/services and the L2-norm of desired LFs. Experimental results from large-scale QoS datasets demonstrate that PLF outperforms state-of-the-art models in terms of both accuracy and efficiency.

107 citations
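The core trick — building neighborhoods from learned latent vectors rather than from incomplete raw QoS data — can be illustrated with a minimal sketch (hypothetical data and a plain cosine similarity; not the paper's exact construction):

```python
# Toy "posterior neighborhood": neighbors come from similarities between
# already-learned latent factor (LF) vectors, not from raw QoS records.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def posterior_neighbors(latent, k):
    """For each entity, keep the k most similar others by LF cosine similarity."""
    neighbors = {}
    for u, vec_u in latent.items():
        scored = sorted(
            ((cosine(vec_u, vec_v), v) for v, vec_v in latent.items() if v != u),
            reverse=True,
        )
        neighbors[u] = [v for _, v in scored[:k]]
    return neighbors

latent = {                 # hypothetical primal LF vectors
    "u1": [0.9, 0.1],
    "u2": [0.8, 0.2],      # close to u1 in latent space
    "u3": [0.1, 0.9],      # a different usage pattern
}
print(posterior_neighbors(latent, k=1))  # u1 and u2 pick each other
```

Because the LF vectors are dense even when the QoS matrix is sparse, this similarity is always computable — which is the motivation the abstract gives for the posterior construction.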


Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed DASGD algorithm outperforms state-of-the-art distributed SGD solvers for recommender systems in terms of prediction accuracy as well as scalability, making it highly useful for training LFA-based recommenders on large scale HiDS matrices with the help of cloud computing facilities.
Abstract: Latent factor analysis (LFA) via stochastic gradient descent (SGD) is highly efficient in discovering user and item patterns from the high-dimensional and sparse (HiDS) matrices found in recommender systems. However, most LFA-based recommender systems adopt a standard SGD algorithm, which suffers limited scalability when addressing big data. On the other hand, most existing parallel SGD solvers either rely on a memory-sharing framework designed for a bare machine or suffer high communication costs, which also greatly limits their applications in large-scale systems. To address the above issues, this paper proposes a distributed alternative stochastic gradient descent (DASGD) solver for an LFA-based recommender. Training dependences among latent features are decoupled by alternately fixing one half of the features to learn the other half, following the principle of SGD but in parallel. Its distribution mechanism consists of efficient data partition, allocation and task parallelization strategies, which greatly reduce its communication cost for high scalability. Experimental results on three large-scale HiDS matrices generated by real-world applications demonstrate that the proposed DASGD algorithm outperforms state-of-the-art distributed SGD solvers for recommender systems in terms of prediction accuracy as well as scalability. Hence, it is highly useful for LFA on HiDS matrices with the help of cloud computing facilities.

85 citations
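The alternating decoupling can be sketched in a few lines (a toy serial version with hypothetical data; DASGD's actual contribution is the distributed partitioning and allocation built around this idea):

```python
# Minimal sketch of the alternating idea behind DASGD (illustrative only):
# fix one half of the latent features while updating the other half with SGD,
# then swap. Updates within each half become independent and parallelizable.
import random

def train_alternating(ratings, n_users, n_items, f=4, lr=0.05, epochs=200):
    random.seed(0)
    P = [[random.uniform(0.0, 0.1) for _ in range(f)] for _ in range(n_users)]
    Q = [[random.uniform(0.0, 0.1) for _ in range(f)] for _ in range(n_items)]
    for _ in range(epochs):
        # Phase 1: hold P fixed, apply SGD to Q. Every row of Q now depends
        # only on the frozen P, so these updates could run in parallel.
        for u, i, r in ratings:
            e = r - sum(P[u][k] * Q[i][k] for k in range(f))
            for k in range(f):
                Q[i][k] += lr * e * P[u][k]
        # Phase 2: hold Q fixed, apply SGD to P.
        for u, i, r in ratings:
            e = r - sum(P[u][k] * Q[i][k] for k in range(f))
            for k in range(f):
                P[u][k] += lr * e * Q[i][k]
    return P, Q

# (user, item, rating) triples from a tiny hypothetical HiDS matrix
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 1, 1.0)]
P, Q = train_alternating(ratings, n_users=2, n_items=2)
pred = sum(P[0][k] * Q[0][k] for k in range(4))
print(round(pred, 2))  # close to the observed rating 5.0
```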


Journal ArticleDOI
TL;DR: WAR (Web APIs Recommendation), the first data-driven approach for web APIs recommendation that integrates web API discovery, verification and selection operations based on keywords search over the web API correlation graph, is proposed.
Abstract: The ever-increasing popularity of web APIs allows app developers to leverage a set of existing APIs to achieve their sophisticated objectives. The heavily fragmented distribution of web APIs makes it challenging for an app developer to find appropriate and compatible web APIs. Currently, app developers usually have to manually discover candidate web APIs, verify their compatibility and select appropriate and compatible ones. This process is cumbersome and requires detailed knowledge of web APIs which is often too demanding. It has become a major obstacle to further and broader applications of web APIs. To address this issue, we first propose a web API correlation graph built on extensive data about the compatibility between web APIs. Then, we propose WAR (Web APIs Recommendation), the first data-driven approach for web APIs recommendation that integrates API discovery, verification and selection operations based on keywords search over the web API correlation graph. WAR assists app developers without detailed knowledge of web APIs in searching for appropriate and compatible APIs by typing a few keywords that represent the tasks required to achieve app developers’ objectives. We conducted large-scale experiments on 18,478 real-world APIs and 6,146 real-world apps to demonstrate the usefulness and efficiency of WAR.

68 citations
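The graph-based selection step can be illustrated with a toy compatibility check (hypothetical APIs and edges; WAR's actual keyword search over the correlation graph is far more sophisticated):

```python
# Toy sketch: pick one API per keyword such that the chosen set is
# pairwise compatible in a small "correlation graph" of known-compatible pairs.
from itertools import product

compatible = {("maps", "payments"), ("maps", "chat")}  # undirected edges

def are_compatible(a, b):
    return a == b or (a, b) in compatible or (b, a) in compatible

def recommend(keyword_to_apis, keywords):
    """Return the first combination (one API per keyword) that is pairwise compatible."""
    pools = [keyword_to_apis[k] for k in keywords]
    for combo in product(*pools):
        if all(are_compatible(a, b) for a in combo for b in combo):
            return list(combo)
    return None  # no compatible combination exists

apis = {"location": ["maps"], "billing": ["payments", "legacy-pay"]}
print(recommend(apis, ["location", "billing"]))  # ['maps', 'payments']
```

Even this brute-force version shows why the verification step matters: "legacy-pay" also matches the billing keyword, but it is rejected because no compatibility edge connects it to "maps".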


Journal ArticleDOI
12 Nov 2020
TL;DR: The divergence in the diversity and function of root-associated microbial communities along a continuous fine-scale niche is revealed, thereby highlighting a strictly selective role of soil-root interfaces in shaping the fungal community structure in the mangrove root systems.
Abstract: Mangrove roots harbor a repertoire of microbial taxa that contribute to important ecological functions in mangrove ecosystems. However, the diversity, function, and assembly of mangrove root-associated microbial communities along a continuous fine-scale niche remain elusive. Here, we applied amplicon and metagenome sequencing to investigate the bacterial and fungal communities among four compartments (nonrhizosphere, rhizosphere, episphere, and endosphere) of mangrove roots. We found different distribution patterns for both bacterial and fungal communities in all four root compartments, which could be largely due to niche differentiation along the root compartments and exudation effects of mangrove roots. The functional pattern for bacterial and fungal communities was also divergent within the compartments. The endosphere harbored more genes involved in carbohydrate metabolism, lipid transport, and methane production, and fewer genes were found to be involved in sulfur reduction compared to other compartments. The dynamics of root-associated microbial communities revealed that 56-74% of endosphere bacterial taxa were derived from nonrhizosphere, whereas no fungal OTUs of nonrhizosphere were detected in the endosphere. This indicates that roots may play a more strictly selective role in the assembly of the fungal community compared to the endosphere bacterial community, which is consistent with the projections established in an amplification-selection model. This study reveals the divergence in the diversity and function of root-associated microbial communities along a continuous fine-scale niche, thereby highlighting a strictly selective role of soil-root interfaces in shaping the fungal community structure in the mangrove root systems.

54 citations


Journal ArticleDOI
TL;DR: In this paper, an integer programming-based optimal approach for small-scale kESP-CR problems and an approximation approach for large-scale kESP-CR problems are proposed.
Abstract: Edge Cloud Computing (ECC) provides a new paradigm for app vendors to serve their users with low latency by deploying their services on edge servers in close proximity to mobile users. From the edge infrastructure provider's perspective, a cost-effective k-edge server placement aims to place k edge servers within a particular geographic area to maximize the number of covered mobile users (i.e., user coverage). However, in the distributed and volatile ECC environment, edge servers are subject to failures due to various reasons, e.g., software exceptions, hardware faults, cyberattacks, etc. Users connected to a failed edge server have to access services in the remote cloud if they are not covered by any other edge servers. This significantly impacts users' quality of experience. Thus, the robustness of the edge server network (i.e., network robustness) must be considered. In this paper, we formally model this joint user coverage and network robustness oriented k-edge server placement (kESP-CR) problem, and prove that finding the optimal solution to this problem is NP-hard. To tackle the kESP-CR problem, we propose an integer programming-based optimal approach for small-scale kESP-CR problems and an approximation approach for large-scale kESP-CR problems. Finally, extensive experiments are conducted to evaluate their performance.

46 citations


Journal ArticleDOI
TL;DR: A multi-criterion decision-making (MCDM) based classifier fusion (MCF) strategy to combine different classifiers within an MCDM framework is proposed, and a hierarchical predictive scheme (H-MCF) is investigated to reliably link the multi-modality features and multi-classifiers.

44 citations


Journal ArticleDOI
TL;DR: This study innovatively proposes a multilayered-and-randomized latent factor (MLF) model, adopting randomized learning to train LFs for implementing a ‘one-iteration’ training process that saves time, and adopting the principle of a generally multilayered structure, as in a deep forest or multilayered extreme learning machine, to structure its LFs, thereby enhancing its representative learning ability.
Abstract: How to extract useful knowledge from a high-dimensional and sparse (HiDS) matrix efficiently is critical for many big data-related applications. A latent factor (LF) model has been widely adopted to address this problem. It commonly relies on an iterative learning algorithm like stochastic gradient descent. However, an algorithm of this kind commonly consumes many iterations to converge, resulting in considerable time cost on large-scale datasets. How to accelerate an LF model's training process without accuracy loss becomes a vital issue. To address it, this study innovatively proposes a multilayered-and-randomized latent factor (MLF) model. Its main idea is two-fold: a) adopting randomized learning to train LFs, implementing a ‘one-iteration’ training process that saves time; and b) adopting the principle of a generally multilayered structure, as in a deep forest or multilayered extreme learning machine, to structure its LFs, thereby enhancing its representative learning ability. Empirical studies on six HiDS matrices from real applications demonstrate that, compared with state-of-the-art LF models, an MLF model achieves significantly higher computational efficiency with satisfactory prediction accuracy. It has the potential to handle LF analysis on a large-scale HiDS matrix with real-time requirements.

43 citations


Journal ArticleDOI
TL;DR: Heat treatment at 500 °C considerably enhanced the capacity, rate and stability of Cd(II) sorption by red mud, suggesting red mud could be optimized by heat treatment as a more effective sorbent for Cd(II) removal.

38 citations


Journal ArticleDOI
TL;DR: In this article, the changes in CH4 emissions, sediment properties, methanogenic and methanotrophic communities between two distinct mangrove habitats: one dominated by Kandelia obovata (KO, native species) and the other dominated by Sonneratia apetala (SA, introduced species).
Abstract: Mangrove ecosystems are important methane (CH4) sources driven by microbial activities. Mangrove reforestation has been practiced as a strategy to restore the ecological functions of coastal environments. However, it remains unclear how introduced mangrove species impact their sediment microbial communities and CH4 emissions. Here we compared the changes in CH4 emissions, sediment properties, methanogenic and methanotrophic communities between two distinct mangrove habitats: one dominated by Kandelia obovata (KO, native species) and the other dominated by Sonneratia apetala (SA, introduced species). Compared with the KO sediment, the SA sediment had significantly (P

Journal ArticleDOI
TL;DR: Zhang et al. employed Matrix Factorization (MF) approaches to make predictions based on a total of 31,432 Android apps from Google Play, and introduced an adaptive weighting mechanism to neutralize the bias caused by the popularity of third-party libraries.
Abstract: The rapid growth of mobile apps has significantly promoted the use of third-party libraries in mobile app development. However, mobile app developers are now facing the challenge of finding useful third-party libraries for improving their apps, e.g., to enhance user interfaces, to add social features, etc. An effective approach is to leverage collaborative filtering (CF) to predict useful third-party libraries for developers. We employed Matrix Factorization (MF) approaches - the classic CF-based prediction approaches - to make the predictions based on a total of 31,432 Android apps from Google Play. However, our investigation shows that there is a significant lack of diversity in the prediction results - a small fraction of popular third-party libraries dominate the prediction results while most other libraries are ill-served. The low diversity in the prediction results limits the usefulness of the prediction because it lacks novelty and serendipity which are much appreciated by mobile app developers. In order to increase the diversity in the prediction results, we designed an innovative MF-based approach, namely LibSeek, specifically for predicting useful third-party libraries for mobile apps. It employs an adaptive weighting mechanism to neutralize the bias caused by the popularity of third-party libraries. In addition, it introduces neighborhood information, i.e., information about similar apps and similar third-party libraries, to personalize the predictions for individual apps. The experimental results show that LibSeek can significantly diversify the prediction results, and in the meantime, increase the prediction accuracy.
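One simple way to neutralize popularity bias — not necessarily LibSeek's exact mechanism, and with hypothetical data — is to down-weight each library's raw match score by a function of how many apps already use it:

```python
# Illustrative popularity debiasing: divide a library's relevance score
# by log2(2 + popularity) so ubiquitous libraries stop dominating the ranking.
import math

def debiased_scores(match_scores, usage_counts):
    """Return scores penalized by each library's popularity."""
    return {
        lib: score / math.log2(2 + usage_counts.get(lib, 0))
        for lib, score in match_scores.items()
    }

raw = {"okhttp": 0.90, "niche-lib": 0.80}    # hypothetical relevance scores
usage = {"okhttp": 30000, "niche-lib": 30}   # apps already using each library
scores = debiased_scores(raw, usage)
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # ['niche-lib', 'okhttp'] -- the niche library now surfaces
```

This is the diversity effect the abstract describes: after debiasing, a slightly less relevant but far less popular library can outrank the dominant one, improving novelty and serendipity.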

Journal ArticleDOI
TL;DR: A novel distributed bipartite compensator with intermittent communication mechanism is proposed that is able to achieve intermittent communication between neighbors, is capable of estimating the leader’s states within finite time, and is applicable for the directed signed communication topology.

Journal ArticleDOI
TL;DR: In this paper, the authors focus on the cost-effectiveness of user allocation solutions with two optimization objectives: maximizing the number of users allocated to edge servers and, secondarily, minimizing the number of edge servers required.
Abstract: Edge computing is a new distributed computing paradigm extending the cloud computing paradigm, offering much lower end-to-end latency, as real-time, latency-sensitive applications can now be deployed on edge servers that are much closer to end-users than distant cloud servers. In edge computing, edge user allocation (EUA) is a critical problem for any app vendors, who need to determine which edge servers will serve which users. This is to satisfy application-specific optimization objectives, e.g., maximizing users' overall quality of experience, minimizing system costs, and so on. In this paper, we focus on the cost-effectiveness of user allocation solutions with two optimization objectives. The primary one is to maximize the number of users allocated to edge servers. The secondary one is to minimize the number of required edge servers, which subsequently reduces the operating costs for app vendors. We first model this problem as a bin packing problem and introduce an approach for finding optimal solutions. However, finding optimal solutions to the NP-hard EUA problem in large-scale scenarios is intractable. Thus, we propose a heuristic to efficiently find sub-optimal solutions to large-scale EUA problems. Extensive experiments conducted on real-world data demonstrate that our heuristic can solve the EUA problem effectively and efficiently, outperforming the state-of-the-art and baseline approaches.
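The bin-packing view invites classic heuristics; a first-fit-decreasing sketch (illustrative, with made-up demands and capacities — the paper's heuristic may differ) shows how the secondary objective, minimizing opened servers, is pursued:

```python
# Bin-packing view of EUA with a first-fit-decreasing heuristic:
# each user has a resource demand, each opened edge server a fixed capacity,
# and a new server is opened only when no existing one can host the user.

def first_fit_decreasing(demands, capacity):
    """Return remaining capacities of the servers opened to host all demands."""
    servers = []  # remaining capacity per opened server
    for d in sorted(demands, reverse=True):   # place big users first
        for i, free in enumerate(servers):
            if d <= free:
                servers[i] -= d               # fits on an existing server
                break
        else:
            servers.append(capacity - d)      # open a new edge server
    return servers

demands = [5, 3, 4, 4, 2, 6]                  # hypothetical user demands
servers = first_fit_decreasing(demands, capacity=8)
print(len(servers))  # 3 -- total demand 24 packed perfectly into 8-unit servers
```

First-fit-decreasing is a standard approximation for bin packing; sorting descending prevents large demands from stranding half-empty servers opened for small ones.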

Journal ArticleDOI
Da Hu1, Hai Zhong2, Shuai Li1, Jindong Tan1, Qiang He1 
TL;DR: A novel framework is proposed to enable robotic disinfection in built environments to reduce pathogen transmission and exposure, and a deep-learning method is developed to segment and map areas of potential contamination in three dimensions based on the object affordance concept.

Proceedings ArticleDOI
01 Jan 2020
TL;DR: Wang et al. proposed an optimal approach named EAD-opt to find optimal solutions to the EAD problem based on integer programming, and an approximation approach named EAD-apx to find approximate solutions to large-scale EAD scenarios efficiently.
Abstract: Mobile edge computing has emerged as a new distributed computing paradigm that overcomes the limitations of traditional cloud computing. In an edge computing environment, an app vendor can hire computing and storage resources on edge servers for deploying their applications to deliver lower-latency services to their app users. Under a budget constraint, an optimal edge application deployment strategy allows an app vendor to deploy application instances on edge servers in a specific area and provide services to the most app users in the area. In this paper, we make the first attempt to tackle this edge application deployment (EAD) problem. Specifically, we formulate the EAD problem as a constrained optimization problem and prove its NP-hardness. Then, we propose an optimal approach named EAD-opt to find the optimal solution of EAD based on integer programming, and an approximation approach named EAD-apx to find approximate solutions in large-scale EAD scenarios efficiently. We evaluate our approaches by conducting experiments on a widely used real-world data set and a synthetic data set with comparison against two baseline approaches. The experimental results demonstrate that our approaches can solve the EAD problem effectively and efficiently.

Journal ArticleDOI
TL;DR: Transfer learning facilitates mass classification for both DBT and FFDM, and DBT outperforms FFDM when equipped with transfer learning; the DBT-based DCNN outperforms the FFDM-based DCNN when equipped with transfer learning.
Abstract: To evaluate the impact of utilizing digital breast tomosynthesis (DBT) or/and full-field digital mammography (FFDM), and different transfer learning strategies on deep convolutional neural network (DCNN)-based mass classification for breast cancer. We retrospectively collected 441 patients with both DBT and FFDM on which regions of interest (ROIs) covering the malignant, benign and normal tissues were extracted for DCNN training and validation. Experiments were conducted for tasks in distinguishing malignant/benign/normal: (1) classification capabilities of DBT vs FFDM and the role of transfer learning were validated on 2D-DCNN; (2) different strategies of combining DBT and FFDM and the associated impacts on classification were explored; (3) 2D-DCNN and 3D-DCNN trained from scratch with volumetric DBT were compared. 2D-DCNN with transfer learning outperformed that without for DBT in distinguishing malignant (ΔAUC = 0.059 ± 0.009, p < 0.001), benign (ΔAUC = 0.095 ± 0.010, p < 0.001) and normal tissue (ΔAUC = 0.042 ± 0.004, p < 0.001) (paired samples t test). 2D-DCNN trained on DBT (with transfer learning) achieved higher accuracy than those on FFDM (malignant: ΔAUC = 0.014 ± 0.014, p = 0.037; benign: ΔAUC = 0.031 ± 0.006, p < 0.001; normal: ΔAUC = 0.017 ± 0.004, p < 0.001) (independent samples t test). The 2D-DCNN employing both DBT and FFDM for training achieved better performances in benign (FFDM: ΔAUC = 0.010 ± 0.008, p < 0.001; DBT: ΔAUC = 0.009 ± 0.005, p < 0.001) and normal (FFDM: ΔAUC = 0.005 ± 0.003, p < 0.001; DBT: ΔAUC = 0.002 ± 0.002, p < 0.001) (related samples Friedman test). The 3D-DCNN and 2D-DCNN trained from scratch with DBT only produced moderate classification. Transfer learning facilitates mass classification for both DBT and FFDM, and DBT outperforms FFDM when equipped with transfer learning. Integrating DBT and FFDM in DCNN training enhances mass classification accuracy for breast cancer. 
• Transfer learning facilitates mass classification for both DBT and FFDM, and the DBT-based DCNN outperforms the FFDM-based DCNN when equipped with transfer learning. • Integrating DBT and FFDM in DCNN training enhances breast mass classification accuracy. • 3D-DCNN/2D-DCNN trained from scratch with volumetric DBT but without transfer learning only produce moderate mass classification results.

Journal ArticleDOI
TL;DR: Two service recommendation approaches for multi-tenant SBSs are presented, one for build-time and one for runtime, based on K-Means clustering and Locality-Sensitive Hashing techniques respectively, aiming at finding appropriate services efficiently.
Abstract: The popularity of cloud computing has fueled the growth in multi-tenant service-based systems (SBSs) that are composed of selected cloud services. In the cloud environment, a multi-tenant SBS simultaneously serves multiple tenants that usually have differentiated QoS requirements. This unique characteristic further complicates the problems of QoS-aware service selection at build-time and system adaptation at runtime, and renders conventional approaches obsolete and inefficient. In the dynamic and volatile cloud environment, the efficiency of building and adapting a multi-tenant SBS is of paramount importance. In this paper, we present two service recommendation approaches for multi-tenant SBSs, one for build-time and one for runtime, based on K-Means clustering and Locality-Sensitive Hashing (LSH) techniques respectively, aiming at finding appropriate services efficiently. Extensive experimental results demonstrate that our approaches can facilitate fast multi-tenant SBS construction and rapid system adaptation.
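The runtime LSH idea can be sketched with random-hyperplane hashing. The planes and QoS vectors below are hand-picked so the example is deterministic; real LSH draws the hyperplanes at random:

```python
# Sketch of the runtime idea: Locality-Sensitive Hashing buckets services
# with similar QoS vectors together, so a lookup scans one bucket instead
# of every service. Planes and data are illustrative stand-ins.

# Each plane splits the space; the sign pattern is the hash key.
PLANES = [[1, -1, 0], [0, 1, -1], [1, 0, -1], [1, 1, -2]]

def lsh_key(vec):
    """One bit per hyperplane: which side of the plane the vector falls on."""
    return tuple(
        int(sum(p * x for p, x in zip(plane, vec)) >= 0) for plane in PLANES
    )

services = {                       # hypothetical QoS vectors
    "svc-a": [0.9, 0.8, 0.1],
    "svc-b": [0.85, 0.75, 0.15],   # QoS profile similar to svc-a
    "svc-c": [0.1, 0.2, 0.9],      # a very different profile
}
buckets = {}
for name, vec in services.items():
    buckets.setdefault(lsh_key(vec), []).append(name)

# A query only inspects the candidates in its own bucket.
candidates = buckets.get(lsh_key([0.88, 0.79, 0.12]), [])
print(candidates)  # ['svc-a', 'svc-b']
```

Nearby vectors tend to land on the same side of every plane and thus share a bucket, which is what makes runtime adaptation fast: candidate retrieval is a hash lookup, not a full scan.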

Journal ArticleDOI
TL;DR: It is proven that the proposed controllers exclude Zeno behavior in both consensus problems, are capable of addressing the case of input delay, and are applicable to signed communication topologies.
Abstract: This study focuses on the distributed bipartite consensus tracking for linear multi-agent systems with input time delay based upon event-triggered transmission mechanism. Both cooperative interaction and antagonistic interaction between neighbor agents are considered. A novel distributed bipartite control technique with event-triggered mechanism is raised to address this consensus issue. Different from the existing methods, our control technique does not need continuous communication among agents, is capable of addressing the case of input delay, and is applicable for the signed communication topology. Moreover, to avoid continuous monitoring of one's own state, a self-triggered control strategy is further proposed. And when the system states cannot be measured, the observer-based bipartite control technique with event-triggered mechanism is thus put forward. Furthermore, the results in leader-following consensus are extended to containment control. It is proven that the proposed controllers fulfill the exclusion of Zeno behavior in two consensus problems. Finally, simulation experiments are used to test the practicability of the theoretical analysis.

Proceedings ArticleDOI
01 Jan 2020
TL;DR: This paper formally models the Robustness-oriented k Edge Server Placement (RkESP) problem, proves that finding the optimal solution to this problem is NP-hard, and proposes an integer programming-based optimal approach, namely Opt, to find optimal solutions to small-scale RkESP problems.
Abstract: Mobile Edge Computing (MEC) is an emerging and prospective computing paradigm that supports low-latency content delivery. In a MEC environment, edge servers are attached to base stations or access points in close proximity to end-users to reduce the end-to-end latency in their access to online content. From an edge infrastructure provider’s perspective, a cost-effective k edge server placement (kESP) places k edge servers within a particular geographic area to maximize their coverage. However, in the distributed MEC environment, edge servers are often subject to failures due to various reasons, e.g., software exceptions, hardware faults, cyberattacks, etc. End-users connected to a failed edge server have to access online content from the remote cloud if they are not covered by any other edge servers. This significantly jeopardizes end-users’ quality of experience. Thus, the robustness of an edge server network must be considered in edge server placement. In this paper, we formally model this Robustness-oriented k Edge Server Placement (RkESP) problem, and prove that finding the optimal solution to this problem is NP-hard. Thus, we first propose an integer programming-based optimal approach, namely Opt, to find optimal solutions to small-scale RkESP problems. Then, we propose an approximate approach, namely Approx, for solving large-scale RkESP problems efficiently with an O(k)-approximation ratio. Finally, the performance of the two approaches is experimentally evaluated against five state-of-the-art approaches on a real-world dataset and a large-scale synthesized dataset.

Proceedings ArticleDOI
01 Jan 2020
TL;DR: In this paper, an effective and efficient game-theoretic approach that admits a Nash equilibrium as a solution to the user allocation problem in edge computing has been proposed, which is able to fully utilize the distributed nature of edge computing.
Abstract: As many applications and services are moving towards a more human-centered design, app vendors are taking the quality of experience (QoE) increasingly seriously. End-to-end latency is a key factor that determines the QoE experienced by users, especially for latency-sensitive applications such as online gaming, health care, critical warning systems and so on. Recently, edge computing has emerged as a promising solution to the high latency problem. In an edge computing environment, edge servers are deployed at cellular base stations, offering processing power and low network latency to users within their geographic proximity. In this paper, we tackle the user allocation problem in edge computing from an app vendor's perspective, where the vendor needs to decide which edge servers to serve which users in a specific area. Also, the vendor must consider the various levels of quality of service (QoS) for its users. Each QoS level results in a different QoE level; thus, the app vendor needs to decide the QoS level for each user so that the overall user experience is maximized. To tackle the NP-hardness of this problem, we formulate it as a potential game then propose QoEGame, an effective and efficient game-theoretic approach that admits a Nash equilibrium as a solution to the user allocation problem. Being a distributed algorithm, QoEGame is able to fully utilize the distributed nature of edge computing. Finally, we theoretically and empirically evaluate the performance of QoEGame, which is illustrated to be significantly better than the state of the art and other baseline approaches.

Journal ArticleDOI
TL;DR: A Community-based Approach for the OMP (CAOM) is proposed, comprising community detection, selection of candidate nodes, and generation of seed nodes; significant communities are identified to reduce computational complexity effectively and to distribute seed nodes into suitable communities.

Journal ArticleDOI
TL;DR: This paper proposes ISUAGame, a game-theoretic approach that formulates the interference-aware SUA (ISUA) problem as a potential game, and designs a novel decentralized algorithm for finding a Nash equilibrium in the game as a solution to the ISUA problem.
Abstract: Edge Computing, extending cloud computing, has emerged as a prospective computing paradigm. It allows a SaaS (Software-as-a-Service) vendor to allocate its users to nearby edge servers to minimize network latency and energy consumption on their devices. From the SaaS vendor's perspective, a cost-effective SaaS user allocation (SUA) aims to allocate maximum SaaS users on minimum edge servers. However, the allocation of excessive SaaS users to an edge server may result in severe interference and consequently impact SaaS users' data rates. In this paper, we formally model this problem and prove that finding the optimal solution to this problem is NP-hard. Thus, we propose ISUAGame, a game-theoretic approach that formulates the interference-aware SUA (ISUA) problem as a potential game. We analyze the game and show that it admits a Nash equilibrium. Then, we design a novel decentralized algorithm for finding a Nash equilibrium in the game as a solution to the ISUA problem. The performance of this algorithm is theoretically analyzed and experimentally evaluated. The results show that the ISUA problem can be solved effectively and efficiently.

Journal ArticleDOI
TL;DR: A novel multi-domain network service deployment framework is proposed by integrating SDN architecture and NFV technology, which can intelligently deploy virtual network functions (VNFs) into multi- domain networks.

Journal ArticleDOI
TL;DR: How the adoption of CC in Australia is related to technological factors, risk factors, and environmental factors is explored to provide useful insights that can be utilised practically by SMEs, policymakers, and cloud vendors.
Abstract: Cloud Computing (CC) is an emerging technology that can potentially revolutionise the application and delivery of IT. There has been little research, however, into the adoption of CC in Small and Medium-Sized Enterprises (SMEs). The indicators show that CC has been adopted very slowly. There is also a significant research gap in the investigation of the adoption of this innovation in SMEs. This article explores how the adoption of CC in Australia is related to technological factors, risk factors, and environmental factors. The study provides useful insights that can be utilised practically by SMEs, policymakers, and cloud vendors.

Journal ArticleDOI
TL;DR: MPDroid is proposed, an approach that combines static analysis and collaborative filtering to identify the minimum permissions an Android app needs based on its app description and API usage; experiments show that MPDroid significantly outperforms the state-of-the-art approach.

Journal ArticleDOI
TL;DR: In this article, bauxite residue was evaluated for the first time as a sustainable sorbent material for the removal of ciprofloxacin, and removal of the antibiotic was shown to be positively correlated with the cation exchange capacity and specific surface area of the bauxite residue.

Journal ArticleDOI
TL;DR: A novel approach for service QoS prediction called NDMF, which integrates user neighborhoods selected in a collaborative way into an enhanced matrix factorization model via a deep neural network (DNN) and significantly outperforms state-of-the-art approaches in terms of multiple evaluation metrics.
Abstract: Quality of service (QoS) is widely used to represent the non-functional properties of Web services and to differentiate services with the same functionality. Accurately predicting service QoS has therefore become a key research topic. In recent years, researchers have incorporated neighborhood information into matrix factorization (MF) for service QoS prediction. However, these approaches are restricted to traditional matrix factorization, which incurs two limitations. 1) Conventional MF for QoS prediction linearly combines the latent feature representations of users and services through an inner product, failing to fully capture their implicit features. 2) Most approaches integrate user or service neighborhoods into the MF model as heuristics, using either location context or historical invocation records to identify similar users or services; combining both in a collaborative way for neighborhood selection has yet to be properly explored. To address these challenges, we propose a novel approach for service QoS prediction called Neighborhood-integrated Deep Matrix Factorization (NDMF), which integrates user neighborhoods selected in a collaborative way into an enhanced matrix factorization model via a deep neural network (DNN). We implement a prototype system and conduct extensive experiments on WS-DREAM, a public, real-world Web service dataset with almost 2,000,000 service invocations that is widely used in service QoS prediction. The experimental results demonstrate that our proposed approach significantly outperforms state-of-the-art ones in terms of multiple evaluation metrics.
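The neighborhood idea underlying NDMF can be illustrated with a plain matrix factorization trained by SGD, where an extra regularization term pulls each user's latent vector toward those of its neighbors. This is a minimal sketch of neighborhood-regularized MF, assuming a simple linear model; the paper's actual NDMF replaces the inner product with a deep neural network and selects neighbors collaboratively.

```python
import random

def train_neighborhood_mf(R, num_users, num_items, k=2, lr=0.02, reg=0.02,
                          nbr_reg=0.01, neighbors=None, epochs=1000):
    """Minimal neighborhood-regularized matrix factorization (illustrative).

    R: list of (user, item, qos) observations.
    neighbors: maps a user to a list of similar users; the extra gradient term
    pulls the user's latent vector toward the mean of its neighbors' vectors.
    """
    random.seed(42)  # deterministic initialization for reproducibility
    U = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(num_users)]
    V = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(num_items)]
    neighbors = neighbors or {}
    for _ in range(epochs):
        for u, i, r in R:
            pred = sum(U[u][f] * V[i][f] for f in range(k))
            e = r - pred  # prediction error for this observation
            for f in range(k):
                uf, vf = U[u][f], V[i][f]
                nbrs = neighbors.get(u, [])
                # Neighborhood term: pull U[u] toward its neighbors' mean.
                pull = (sum(U[n][f] for n in nbrs) / len(nbrs) - uf) if nbrs else 0.0
                U[u][f] += lr * (e * vf - reg * uf + nbr_reg * pull)
                V[i][f] += lr * (e * uf - reg * vf)
    return U, V

def predict(U, V, u, i):
    """Predict the QoS of user u invoking service i."""
    return sum(a * b for a, b in zip(U[u], V[i]))
```

After training on observed invocations, missing QoS entries are estimated with `predict`; the neighborhood term keeps similar users' latent vectors close, which is what regularizes predictions for users with few observations.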

Journal ArticleDOI
TL;DR: This work proposes a heuristic approach that can effectively and efficiently find sub-optimal solutions to the QoE-aware EUA problem, and conducts a series of experiments on a real-world dataset to evaluate its performance against several state-of-the-art and baseline approaches.

Proceedings ArticleDOI
01 Jan 2020
TL;DR: In this article, the authors presented the budgeted edge data caching problem as a constrained optimization problem to maximize the overall reduction in data retrieval for all its app users within the budget, and proved that it is NP-hard.
Abstract: In mobile edge computing (MEC), edge servers are deployed at base stations to provide highly accessible computational resources and storage capacities to nearby mobile devices. Caching data on edge servers can improve service quality and reduce network latency for those mobile devices. However, an app vendor needs to ensure that the data caching cost does not exceed its data caching budget. In this paper, we present the budgeted edge data caching (BEDC) problem as a constrained optimization problem that maximizes the overall reduction in data retrieval for all its app users within the budget, and prove that it is NP-hard. Then, we provide an approach named IP-BEDC for solving the BEDC problem optimally based on integer programming. We also provide an O(k)-approximation algorithm, named α-BEDC, to find near-optimal solutions to the BEDC problem efficiently. Our proposed approaches are evaluated on a real-world dataset and a synthesized dataset. The results demonstrate that our approaches can solve the BEDC problem effectively and efficiently while significantly outperforming five representative approaches.
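The budgeted caching trade-off above resembles a knapsack problem: each data item has a caching cost and a benefit (the data-retrieval reduction it yields), and the total cost must stay within the budget. A common way to approximate such problems is a greedy pass by benefit/cost ratio, sketched below; this is an illustrative heuristic, not the paper's α-BEDC algorithm.

```python
def greedy_budgeted_cache(items, budget):
    """Greedy sketch for budgeted edge data caching.

    items: list of (data_id, cost, benefit) tuples, where benefit is the
    data-retrieval reduction gained by caching that item on the edge server.
    Items are considered in decreasing benefit/cost order and cached while
    the remaining budget allows, a classic knapsack-style heuristic.
    """
    chosen, spent, gain = [], 0, 0
    for data_id, cost, benefit in sorted(items, key=lambda x: x[2] / x[1], reverse=True):
        if spent + cost <= budget:
            chosen.append(data_id)
            spent += cost
            gain += benefit
    return chosen, gain
```

For example, with items a (cost 2, benefit 10), b (cost 3, benefit 9), and c (cost 4, benefit 4) under a budget of 5, the greedy pass caches a and b. An exact method such as the paper's integer-programming approach would instead enumerate the optimal subset at higher computational cost.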