
Showing papers by "Qiang He published in 2021"


Journal ArticleDOI
TL;DR: An online algorithm called CEDC-O is proposed; developed based on Lyapunov optimization, it works online without requiring future information and achieves provable close-to-optimal performance.
Abstract: In the edge computing (EC) environment, edge servers are deployed at base stations to offer highly accessible computing and storage resources to nearby app users. From the app vendor's perspective, caching data on edge servers can ensure low latency in app users’ retrieval of app data. However, an edge server normally owns limited resources due to its limited size. In this article, we investigate the collaborative caching problem in the EC environment with the aim to minimize the system cost including data caching cost, data migration cost, and quality-of-service (QoS) penalty. We model this collaborative edge data caching problem (CEDC) as a constrained optimization problem and prove that it is NP-complete. We propose an online algorithm, called CEDC-O, to solve this CEDC problem during all time slots. CEDC-O is developed based on Lyapunov optimization, works online without requiring future information, and achieves provable close-to-optimal performance. CEDC-O is evaluated on a real-world data set, and the results demonstrate that it significantly outperforms four representative approaches.
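The abstract does not spell out CEDC-O's internals, but the Lyapunov drift-plus-penalty pattern it builds on can be sketched. The toy cost model, the function names, and the single virtual queue below are assumptions for illustration, not the paper's actual formulation:

```python
# Illustrative sketch of the generic Lyapunov drift-plus-penalty pattern
# underlying online algorithms such as CEDC-O. All names and the toy
# cost/constraint model are assumptions; the paper's formulation differs.

def drift_plus_penalty_step(Q, options, V):
    """Pick the option minimizing V*cost + Q*constraint_usage for one slot.

    Q       -- current virtual-queue backlog (penalizes constraint violation)
    options -- list of (cost, constraint_usage) pairs available this slot
    V       -- trade-off weight between cost and queue stability
    """
    return min(options, key=lambda o: V * o[0] + Q * o[1])

def run_online(slots, budget_per_slot, V=10.0):
    """Run the online loop over time slots without future information."""
    Q, total_cost = 0.0, 0.0
    for options in slots:
        cost, usage = drift_plus_penalty_step(Q, options, V)
        total_cost += cost
        # Virtual queue update: backlog grows when usage exceeds the budget.
        Q = max(Q + usage - budget_per_slot, 0.0)
    return total_cost, Q
```

As the backlog Q grows, the per-slot decision shifts toward options that use fewer constrained resources, which is how such algorithms stay near-feasible without future information.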

130 citations


Journal ArticleDOI
TL;DR: A new deep CF model for service recommendation, named location-aware deep CF (LDCF), which can not only learn the high-dimensional and nonlinear interactions between users and services but also significantly alleviate the data sparsity problem.
Abstract: With the widespread application of service-oriented architecture (SOA), a flood of similarly functioning services have been deployed online. How to recommend services to users to meet their individual needs becomes the key issue in service recommendation. In recent years, methods based on collaborative filtering (CF) have been widely proposed for service recommendation. However, traditional CF typically exploits only low-dimensional and linear interactions between users and services and is challenged by the problem of data sparsity in the real world. To address these issues, inspired by deep learning, this article proposes a new deep CF model for service recommendation, named location-aware deep CF (LDCF). This model offers the following innovations: 1) the location features are mapped into high-dimensional dense embedding vectors; 2) the multilayer-perceptron (MLP) captures the high-dimensional and nonlinear characteristics; and 3) the similarity adaptive corrector (AC) is first embedded in the output layer to correct the predictive quality of service. Equipped with these, LDCF can not only learn the high-dimensional and nonlinear interactions between users and services but also significantly alleviate the data sparsity problem. Through substantial experiments conducted on a real-world Web service dataset, the results indicate that LDCF clearly outperforms nine state-of-the-art service recommendation methods in recommendation performance.
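The embedding-plus-MLP pattern described in innovations 1) and 2) can be sketched minimally. The dimensions, random initialization, and single hidden layer below are assumptions; LDCF's actual architecture (including its adaptive corrector) is specified in the paper:

```python
import numpy as np

# Minimal sketch of the "embeddings -> MLP -> QoS prediction" pattern.
# Sizes and the one-hidden-layer network are illustrative assumptions.

rng = np.random.default_rng(0)
n_users, n_services, emb_dim, hidden = 100, 50, 8, 16

user_emb = rng.normal(size=(n_users, emb_dim))    # dense user embeddings
svc_emb = rng.normal(size=(n_services, emb_dim))  # dense service embeddings
W1 = rng.normal(size=(2 * emb_dim, hidden))       # hidden-layer weights
W2 = rng.normal(size=hidden)                      # output weights

def predict_qos(user_id, service_id):
    """Forward pass: concat embeddings -> ReLU hidden layer -> scalar QoS."""
    x = np.concatenate([user_emb[user_id], svc_emb[service_id]])
    h = np.maximum(x @ W1, 0.0)                   # nonlinear interaction
    return float(h @ W2)
```

The ReLU hidden layer is what lets the model capture nonlinear user-service interactions that a plain inner product cannot.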

119 citations


Journal ArticleDOI
TL;DR: CNMF is proposed, a covering-based quality prediction method for Web services via neighborhood-aware matrix factorization that significantly outperforms eight existing quality prediction methods, including two state-of-the-art methods that also utilize neighborhood information with MF.
Abstract: The number of Web services on the Internet has been growing rapidly. This has made it increasingly difficult for users to find the right services from a large number of functionally equivalent candidate services. Inspecting every Web service for their quality value is impractical because it is very resource consuming. Therefore, the problem of quality prediction for Web services has attracted a lot of attention in the past several years, with a focus on the application of the Matrix Factorization (MF) technique. Recently, researchers have started to employ user similarity to improve MF-based prediction methods for Web services. However, none of the existing methods has properly and systematically addressed two of the major issues: 1) retrieving appropriate neighborhood information, i.e., similar users and services; 2) utilizing full neighborhood information, i.e., both users’ and services’ neighborhood information. In this paper, we propose CNMF, a covering-based quality prediction method for Web services via neighborhood-aware matrix factorization. The novelty of CNMF is twofold. First, it employs a covering-based clustering method to find similar users and services, which does not require the number of clusters and cluster centroids to be prespecified. Second, it utilizes neighborhood information on both users and services to improve the prediction accuracy. The results of experiments conducted on a real-world dataset containing 1,974,675 Web service invocation records demonstrate that CNMF significantly outperforms eight existing quality prediction methods, including two state-of-the-art methods that also utilize neighborhood information with MF.
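The second novelty, folding both users' and services' neighborhood information into an MF prediction, can be sketched as follows. The blending weight and the simple averaging of neighbour estimates are assumptions; CNMF's covering-based clustering and exact model are defined in the paper:

```python
import numpy as np

# Sketch of neighborhood-aware matrix-factorization prediction.
# alpha and the averaging scheme are illustrative assumptions.

def predict(U, S, i, j, user_neighbors, svc_neighbors, alpha=0.5):
    """Blend the latent-factor estimate with neighbourhood estimates.

    U, S            -- latent factor matrices (users x k, services x k)
    user_neighbors  -- indices of users similar to user i
    svc_neighbors   -- indices of services similar to service j
    alpha           -- weight on the pure MF term (assumed)
    """
    mf = U[i] @ S[j]                                   # plain MF estimate
    nbr_u = np.mean([U[n] @ S[j] for n in user_neighbors]) if user_neighbors else mf
    nbr_s = np.mean([U[i] @ S[n] for n in svc_neighbors]) if svc_neighbors else mf
    return alpha * mf + (1 - alpha) * 0.5 * (nbr_u + nbr_s)
```

With no neighbours the prediction degrades gracefully to the plain MF estimate, which is the behaviour one wants under data sparsity.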

114 citations


Journal ArticleDOI
TL;DR: The first attempt is made to formulate this Edge Data Distribution (EDD) problem as a constrained optimization problem from the app vendor's perspective, and an optimal approach named EDD-IP is proposed to solve the problem exactly with the Integer Programming technique.
Abstract: Edge computing, as an extension of cloud computing, distributes computing and storage resources from centralized cloud to distributed edge servers, to power a variety of applications demanding low latency, e.g., IoT services, virtual reality, real-time navigation, etc. From an app vendor's perspective, app data needs to be transferred from the cloud to specific edge servers in an area to serve the app users in the area. However, according to the pay-as-you-go business model, distributing a large amount of data from the cloud to edge servers can be expensive. The optimal data distribution strategy must minimize the cost incurred, which includes two major components, the cost of data transmission between the cloud and edge servers and the cost of data transmission between edge servers. In the meantime, the delay constraint must be fulfilled - the data distribution must not take too long. In this article, we make the first attempt to formulate this Edge Data Distribution (EDD) problem as a constrained optimization problem from the app vendor's perspective and prove its NP-hardness. We propose an optimal approach named EDD-IP to solve this problem exactly with the Integer Programming technique. Then, we propose an O(k)-approximation algorithm named EDD-A for finding approximate solutions to large-scale EDD problems efficiently. EDD-IP and EDD-A are evaluated on a real-world dataset and the results demonstrate that they significantly outperform three representative approaches.

108 citations


Proceedings ArticleDOI
19 Apr 2021
TL;DR: CoopEdge as mentioned in this paper is a blockchain-based decentralized platform for cooperative edge computing, where an edge server can publish a computation task for other edge servers to contend for and a winner is selected from candidate edge servers based on their reputation.
Abstract: Edge computing (EC) has recently emerged as a novel computing paradigm that offers users low-latency services. Suffering from constrained computing resources due to their limited physical sizes, edge servers cannot always handle all the incoming computation tasks in a timely manner when they operate independently. They often need to cooperate through peer-offloading. Deployed and managed by different stakeholders, edge servers operate in a distrusted environment. Trust and incentive are the two main issues that challenge cooperative computing between them. Another unique challenge in the EC environment is to facilitate trust and incentive in a decentralized manner. To tackle these challenges systematically, this paper proposes CoopEdge, a novel blockchain-based decentralized platform, to drive and support cooperative edge computing. On CoopEdge, an edge server can publish a computation task for other edge servers to contend for. A winner is selected from candidate edge servers based on their reputations. After that, a consensus is reached among edge servers to record the performance in task execution on the blockchain. We implement CoopEdge based on Hyperledger Sawtooth and evaluate it experimentally against a baseline and two state-of-the-art implementations in a simulated EC environment. The results validate the usefulness of CoopEdge and demonstrate its performance.
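The reputation-based winner selection and performance recording described above can be sketched in a few lines. The exponential-moving-average update rule is an assumption for illustration; CoopEdge's actual scoring and consensus run on Hyperledger Sawtooth as described in the paper:

```python
# Toy sketch of reputation-based winner selection. The update rule
# (exponential moving average) is an assumption, not CoopEdge's scheme.

def select_winner(candidates, reputation):
    """Pick the candidate edge server with the highest reputation."""
    return max(candidates, key=lambda s: reputation[s])

def record_performance(reputation, server, score, beta=0.8):
    """Fold an observed task-execution score into the server's reputation."""
    reputation[server] = beta * reputation[server] + (1 - beta) * score
```

In the real platform the recorded score would be agreed on via consensus and stored on the blockchain rather than in a local dictionary.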

90 citations


Journal ArticleDOI
TL;DR: Given a service composition and a set of candidate services, Q2C first preprocesses the quality correlations among the candidate services and then constructs a quality correlation index graph to enable efficient queries for quality correlations.
Abstract: As enterprises around the globe embrace globalization, strategic alliances among enterprises have become an important means to gain competitive advantages. Enterprises cooperate to improve the quality or lower the prices of their services, which introduces quality correlations, i.e., the quality of a service is associated with that of other services. Existing approaches for service composition have not fully and systematically considered the quality correlations between services. In this paper, we propose a novel approach named Q2C (Query of Quality Correlation) to systematically model quality correlations and enable efficient queries of quality correlations for service compositions. Given a service composition and a set of candidate services, Q2C first preprocesses the quality correlations among the candidate services and then constructs a quality correlation index graph to enable efficient queries for quality correlations. Extensive experiments are conducted on a real-world web service dataset to demonstrate the effectiveness and efficiency of Q2C.

86 citations


Journal ArticleDOI
TL;DR: This article proposes a lightweight sampling-based probabilistic approach, namely EDI-V, to help app vendors audit the integrity of their data cached on a large scale of edge servers, and proposes a new data structure named variable Merkle hash tree (VMHT) for generating the integrity proofs of those data replicas during the audit.
Abstract: Edge computing allows app vendors to deploy their applications and relevant data on distributed edge servers to serve nearby users. Caching data on edge servers can minimize users’ data retrieval latency. However, such cache data are subject to both intentional and accidental corruption in the highly distributed, dynamic, and volatile edge computing environment. Given a large number of edge servers and their limited computing resources, how to effectively and efficiently audit the integrity of app vendors’ cache data is a critical and challenging problem. This article makes the first attempt to tackle this Edge Data Integrity (EDI) problem. We first analyze the threat model and the audit objectives, then propose a lightweight sampling-based probabilistic approach, namely EDI-V, to help app vendors audit the integrity of their data cached on a large scale of edge servers. We propose a new data structure named variable Merkle hash tree (VMHT) for generating the integrity proofs of those data replicas during the audit. VMHT can ensure the audit accuracy of EDI-V by maintaining sampling uniformity. EDI-V allows app vendors to inspect their cache data and locate the corrupted ones efficiently and effectively. Both theoretical analysis and comprehensively experimental evaluation demonstrate the efficiency and effectiveness of EDI-V.
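The variable Merkle hash tree (VMHT) proposed above generalizes the classic Merkle tree. As a hedged illustration, the sketch below shows only the standard Merkle-root construction that such integrity proofs build on; the "variable" aspects that maintain sampling uniformity are specific to the paper:

```python
import hashlib

# Standard Merkle-root construction over data blocks. Any change to any
# block changes the root, which is what makes sampled integrity audits work.

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash data blocks pairwise up to a single root digest."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

An auditor holding the root can verify a sampled replica block with a logarithmic-size proof path instead of re-hashing the whole cache.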

85 citations


Journal ArticleDOI
TL;DR: Internet of Vehicles (IoV) enables numerous in-vehicle applications for smart cities, driving increasing service demands for processing various contents (e.g., videos).
Abstract: Internet of Vehicles (IoV) enables numerous in-vehicle applications for smart cities, driving increasing service demands for processing various contents (e.g., videos). Generally, for efficient ser...

80 citations


Journal ArticleDOI
TL;DR: An optimal approach named EDMOpti and a novel game-theoretical approach named EDMGame are proposed for mitigating edge DDoS attacks; EDMGame formulates the EDM problem as a potential EDM Game that admits a Nash equilibrium and employs a decentralized algorithm to find the Nash equilibrium as the solution.
Abstract: Edge computing (EC) is an emerging paradigm that extends cloud computing by pushing computing resources onto edge servers that are attached to base stations or access points at the edge of the cloud in close proximity with end-users. Due to edge servers' geographic distribution, the EC paradigm is challenged by many new security threats, including the notorious distributed Denial-of-Service (DDoS) attack. In the EC environment, edge servers usually have constrained processing capacities due to their limited sizes. Thus, they are particularly vulnerable to DDoS attacks. DDoS attacks in the EC environment render existing DDoS mitigation approaches obsolete with their new characteristics. In this paper, we make the first attempt to tackle the edge DDoS mitigation (EDM) problem. We model it as a constraint optimization problem and prove its NP-hardness. To solve this problem, we propose an optimal approach named EDMOpti and a novel game-theoretical approach named EDMGame for mitigating edge DDoS attacks. EDMGame formulates the EDM problem as a potential EDM Game that admits a Nash equilibrium and employs a decentralized algorithm to find the Nash equilibrium as the solution. Through theoretical analysis and experimental evaluation, we demonstrate that our approaches can solve the EDM problem effectively and efficiently.
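The decentralized equilibrium-seeking idea behind potential games like the EDM Game can be sketched with best-response dynamics. The load-balancing cost below is a toy assumption; the paper's game has its own payoff structure, but in any finite potential game this style of iteration converges to a Nash equilibrium:

```python
# Sketch of decentralized best-response dynamics. Each player repeatedly
# switches to its cheapest option given the others' choices; when no
# player can improve, the assignment is a Nash equilibrium.

def best_response_dynamics(n_players, n_servers, max_rounds=100):
    """Each player greedily moves to the least-loaded server until stable."""
    choice = [0] * n_players                   # initial assignment
    for _ in range(max_rounds):
        changed = False
        for p in range(n_players):
            load = [0] * n_servers
            for q, s in enumerate(choice):
                if q != p:                     # load seen by player p
                    load[s] += 1
            best = min(range(n_servers), key=lambda s: load[s])
            if load[best] < load[choice[p]]:   # strict improvement only
                choice[p] = best
                changed = True
        if not changed:                        # no player can improve: NE
            return choice
    return choice
```

Because each improving move strictly decreases a shared potential function, the loop terminates without any central coordinator, which is the property the decentralized EDMGame algorithm exploits.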

63 citations


Journal ArticleDOI
TL;DR: In this article, a traffic flow prediction driven resource reservation method, called TripRes, is developed to address the challenge of how to accurately schedule and dynamically reserve proper numbers of resources for multimedia services in edge servers.
Abstract: The Internet of Vehicles (IoV) connects vehicles, roadside units (RSUs) and other intelligent objects, enabling data sharing among them, thereby improving the efficiency of urban traffic and safety. Currently, collections of multimedia content, generated by multimedia surveillance equipment, vehicles, and so on, are transmitted to edge servers for implementation, because edge computing is a formidable paradigm for accommodating multimedia services with low-latency resource provisioning. However, the uneven or discrete distribution of the traffic flow covered by edge servers negatively affects the service performance (e.g., overload and underload) of edge servers in multimedia IoV systems. Therefore, how to accurately schedule and dynamically reserve proper numbers of resources for multimedia services in edge servers is still challenging. To address this challenge, a traffic flow prediction driven resource reservation method, called TripRes, is developed in this article. Specifically, the city map is divided into different regions, and the edge servers in a region are treated as a “big edge server” to simplify the complex distribution of edge servers. Then, future traffic flows are predicted using the deep spatiotemporal residual network (ST-ResNet), and future traffic flows are used to estimate the amount of multimedia services each region needs to offload to the edge servers. With the number of services to be offloaded in each region, their offloading destinations are determined through latency-sensitive transmission path selection. Finally, the performance of TripRes is evaluated using real-world big data with over 100M multimedia surveillance records from RSUs in Nanjing, China.

56 citations


Journal ArticleDOI
TL;DR: The experimental results show that the proposed DEQP2 model provides measurable privacy preservation without significantly reducing the accuracy.

Journal ArticleDOI
TL;DR: An integer programming based approach named READ-O is proposed for solving the Robustness-oriented Edge Application Deployment (READ) problem, which is formulated as a constrained optimization problem and proved NP-hard, and an approximation algorithm named READ-A is provided for efficiently finding near-optimal solutions to large-scale problems.
Abstract: Edge computing (EC) can overcome several limitations of cloud computing. In the EC environment, a service provider can deploy its application instances on edge servers to serve users with low latency. Given a limited budget K for deploying applications in a particular geographical area, some approaches have been proposed to achieve various optimization objectives, e.g., to maximize the servers' coverage, to minimize the average network latency, etc. However, the robustness of the services collectively delivered by the service provider's applications deployed on the edge servers has not been considered at all. This is a critical issue, especially in the highly distributed, dynamic and volatile EC environment. We make the first attempt to tackle this challenge. Specifically, we formulate this Robustness-oriented Edge Application Deployment (READ) problem as a constrained optimization problem and prove its NP-hardness. Then, we provide an integer programming based approach READ-O for solving it precisely, and an approximation algorithm READ-A for efficiently finding near-optimal solutions to large-scale problems. READ-A's approximation ratio is not worse than K/2, which is constant regardless of the total number of edge servers. Evaluation on a widely-used real-world dataset against five representative approaches demonstrates that our approaches can solve the READ problem effectively and efficiently.

Journal ArticleDOI
TL;DR: This study focuses on the fixed-time event-triggered time-varying formation tracking issue for a class of nonlinear multi-agent systems with multi-dimensional dynamics, uncertain disturbances, and non-zero control input of the leader.

Journal ArticleDOI
TL;DR: In this paper, a bipartite fixed-time time-varying output formation-containment tracking problem for heterogeneous linear multiagent systems with multiple leaders is investigated, where both cooperative communication and antagonistic communication between neighbor agents are taken into account.
Abstract: This study investigates the bipartite fixed-time time-varying output formation-containment tracking issue for heterogeneous linear multiagent systems with multiple leaders. Both cooperative communication and antagonistic communication between neighbor agents are taken into account. First, the bipartite fixed-time compensator is put forward to estimate the convex hull of leaders' states. Different from the existing techniques, the proposed compensator has the following three highlights: 1) it is continuous without involving the sign function, and thus, the chattering phenomenon can be avoided; 2) its estimation can be achieved within a fixed time; and 3) the communication between neighbors can not only be cooperative but also be antagonistic. Note that the proposed compensator is dependent on the global information of network topology. To deal with this issue, the fully distributed adaptive bipartite fixed-time compensator is further proposed. It can estimate not only the convex hull of leaders' states but also the leaders' system matrices. Based on the proposed compensators, the distributed controllers are then developed such that the bipartite time-varying output formation-containment tracking can be achieved within a fixed time. Finally, two examples are given to illustrate the feasibility of the main theoretical findings.

Journal ArticleDOI
TL;DR: A manually curated S cycling database (SCycDB) is developed to profile S cycling functional genes and taxonomic groups for shotgun metagenomes and is expected to be a useful tool for fast and accurate metagenomic analysis of S cycling microbial communities in the environment.
Abstract: Microorganisms play important roles in the biogeochemical cycling of sulphur (S), an essential element in the Earth's biosphere. Shotgun metagenome sequencing has opened a new avenue to advance our understanding of S cycling microbial communities. However, accurate metagenomic profiling of S cycling microbial communities remains technically challenging, mainly due to low coverage and inaccurate definition of S cycling gene families in public orthology databases. Here we developed a manually curated S cycling database (SCycDB) to profile S cycling functional genes and taxonomic groups for shotgun metagenomes. The developed SCycDB contains 207 gene families and 585,055 representative sequences affiliated with 52 phyla and 2684 genera of bacteria/archaea, and 20,761 homologous orthology groups were also included to reduce false positive sequence assignments. SCycDB was applied for functional and taxonomic analysis of S cycling microbial communities from four habitats (freshwater, hot spring, marine sediment and soil). Gene families and microorganisms involved in S reduction were abundant in the marine sediment, while those of S oxidation and dimethylsulphoniopropionate transformation were abundant in the soil. SCycDB is expected to be a useful tool for fast and accurate metagenomic analysis of S cycling microbial communities in the environment.

Journal ArticleDOI
TL;DR: An optimal approach named CEDC-IP is proposed to solve the Constrained Edge Data Caching (CEDC) problem, which is formulated as a constrained optimization problem from the service provider’s perspective and proved NP-hard; an approximation algorithm with a proven approximation ratio is also provided.
Abstract: In recent years, edge computing, as an extension of cloud computing, has emerged as a promising paradigm for powering a variety of applications demanding low latency, e.g., virtual or augmented reality, interactive gaming, real-time navigation, etc. In the edge computing environment, edge servers are deployed at base stations to offer highly-accessible computing capacities to nearby end-users, e.g., CPU, RAM, storage, etc. From a service provider's perspective, caching app data on edge servers can ensure low latency in its users' data retrieval. Given constrained cache spaces on edge servers due to their physical sizes, the optimal data caching strategy must minimize overall user latency. In this paper, we formulate this Constrained Edge Data Caching (CEDC) problem as a constrained optimization problem from the service provider's perspective and prove its NP-hardness. We propose an optimal approach named CEDC-IP to solve this CEDC problem exactly with the Integer Programming technique. We also provide an approximation algorithm named CEDC-A for finding approximate solutions to large-scale CEDC problems efficiently and prove its approximation ratio. CEDC-IP and CEDC-A are evaluated on a real-world data set and a synthesized data set. The results demonstrate that they significantly outperform four representative approaches.

Journal ArticleDOI
TL;DR: An Activated Opinion Maximization Framework (AOMF) is proposed for signed social networks, which is composed of three phases: the selection of candidate seed nodes, the activated opinion formation process, and the determination of seed nodes.

Journal ArticleDOI
TL;DR: In this paper, a game-theoretic approach named I-MEDAGame is proposed, formulating the interference-aware mobile edge device allocation (I-MEDA) problem as an I-MEDA game.
Abstract: Mobile Edge Computing (MEC), as an emerging and prospective mobile computing paradigm, allows a content provider to serve its users by allocating their mobile devices to nearby edge servers to lower the latency in the delivery of its content to those mobile devices. From the content provider's perspective, a cost-effective mobile device allocation (MDA) aims to allocate maximum mobile devices to minimum edge servers. However, the allocation of excessive mobile devices to an edge server may result in severe communication interference and, consequently, impact mobile devices' data rates. Sometimes, not all users' mobile devices can be allocated to edge servers. Unallocated mobile devices can still retrieve content from the remote cloud through base stations, although with high latency. The connection between these mobile devices and the base stations also incurs communication interference. We formally model this Interference-aware Mobile Edge Device Allocation (I-MEDA) problem and propose a game-theoretic approach named I-MEDAGame, which formulates the I-MEDA problem as an I-MEDA game. Our theoretical analysis of I-MEDAGame shows that it admits a Nash equilibrium. I-MEDAGame employs a novel decentralized algorithm to find the Nash equilibrium of the I-MEDA game. The performance of I-MEDAGame is theoretically analyzed and experimentally evaluated.

Journal ArticleDOI
Rui Xiao, Xi Jiang, Yanhai Wang, Qiang He, Baoshan Huang
TL;DR: In this article, the Si-rich (>70%) but Ca- and Al-deficient precursor material for use in alkali-activated materials (AAMs) was identified.
Abstract: Urban waste glass powder (GP) has been identified as a Si-rich (>70%) but Ca- and Al-deficient precursor material for use in alkali-activated materials (AAMs). To facilitate the recycling...

Journal ArticleDOI
TL;DR: In this article, the authors analyze bioinvasions in protected areas (PAs) to determine whether PAs, which the world increasingly relies on to rescue highly valued ecosystems, can also withstand bioinvasions.
Abstract: The world has increasingly relied on protected areas (PAs) to rescue highly valued ecosystems from human activities, but whether PAs will fare well with bioinvasions remains unknown. By analyzing t...

Journal ArticleDOI
Shuai Li, Zhiyao Yang, Da Hu, Liu Cao, Qiang He
TL;DR: This review summarizes current progress in understanding the interactions between attributes of built environments and occupant behaviors that shape the structure and dynamics of indoor microbial communities and discusses the challenges and future research needs.
Abstract: Built environments, occupants, and microbiomes constitute a system of ecosystems with extensive interactions that impact one another. Understanding the interactions between these systems is essential to develop strategies for effective management of the built environment and its inhabitants to enhance public health and well-being. Numerous studies have been conducted to characterize the microbiomes of the built environment. This review summarizes current progress in understanding the interactions between attributes of built environments and occupant behaviors that shape the structure and dynamics of indoor microbial communities. In addition, this review also discusses the challenges and future research needs in the field of microbiomes of the built environment that necessitate research beyond the basic characterization of microbiomes in order to gain an understanding of the causal mechanisms between the built environment, occupants, and microbiomes, which will provide a knowledge base for the development of transformative intervention strategies toward healthy built environments. The pressing need to control the transmission of SARS-CoV-2 in indoor environments highlights the urgency and significance of understanding the complex interactions between the built environment, occupants, and microbiomes, which is the focus of this review.

Journal ArticleDOI
TL;DR: A framework to assess room-level outbreak risks in buildings by modeling built environment characteristics, occupancy information, and pathogen transmission is proposed and a web-based system is developed to provide timely information regarding outbreak risks to occupants and facility managers.

Journal ArticleDOI
TL;DR: In this paper, a decentralized game-theoretic approach is proposed to select a channel and edge server for each user while fulfilling their resource and data rate requirements in a multi-cell multi-channel downlink power-domain NOMA-based MEC system.
Abstract: Mobile edge computing (MEC) allows edge servers to be placed at cellular base stations. App vendors like Uber and YouTube can rent computing resources and deploy latency-sensitive applications on edge servers for their users to access. Non-orthogonal multiple access (NOMA) is an emerging technique that facilitates the massive connectivity of 5G networks, further enhancing the capability of MEC. The edge user allocation (EUA) problem faces new challenges in 5G NOMA-based MEC systems. In this study, we investigate the EUA problem in a multi-cell multi-channel downlink power-domain NOMA-based MEC system. The main objective is to help mobile app vendors maximize their benefit by allocating maximum users to edge servers in a specific area at the lowest computing resource and transmit power costs. To this end, we introduce a decentralized game-theoretic approach to effectively select a channel and edge server for each user while fulfilling their resource and data rate requirements. We theoretically and experimentally evaluate our solution, which significantly outperforms various state-of-the-art and baseline approaches.

Journal ArticleDOI
TL;DR: Adsorption of norfloxacin to iron ore waste was shown to be facilitated by the pH range of 2-10, low cation concentration, and low temperature, which are characteristic of natural surface waters, suggesting the potential of practical applications in aquatic environments.

Journal ArticleDOI
Yuliang Cai, Huaguang Zhang, Weihua Li, Yunfei Mu, Qiang He
TL;DR: In this paper, a distributed bipartite adaptive event-triggered fault-tolerant consensus tracking issue for linear multiagent systems in the presence of actuator faults based on the output feedback control protocol is considered.
Abstract: This article considers the distributed bipartite adaptive event-triggered fault-tolerant consensus tracking issue for linear multiagent systems in the presence of actuator faults based on the output feedback control protocol. Both time-varying additive and multiplicative actuator faults are taken into account simultaneously, and the upper/lower bounds of actuator faults are not required to be known. First, the state observer is designed to handle unmeasurable system states. Two kinds of event-triggered mechanisms are then developed to schedule the interagent communication and controller updates. Next, with the developed event-triggered mechanisms, a novel observer-based bipartite adaptive control strategy is proposed such that the fault-tolerant control problem can be addressed. Compared with some related works on this topic, our control scheme can achieve intermittent communication and intermittent controller updates, and more general actuator faults and network topology are considered. It is proved that the exclusion of Zeno behavior can be realized. Finally, three illustrative examples are given to demonstrate the feasibility of the main theoretical findings.

Journal ArticleDOI
TL;DR: CooperEDI as discussed by the authors employs a distributed consensus mechanism to form a self-managed edge caching system, where edge servers cooperatively ensure the integrity of cached replicas and repair corrupted ones.
Abstract: The new mobile edge computing (MEC) paradigm fundamentally changes the data caching technique by allowing data to be cached on edge servers attached to base stations within hundreds of meters from users. It provides a bounded latency guarantee for latency-sensitive applications, e.g., interactive AR/VR applications, online gaming, etc. However, in the highly distributed MEC environment, cache data is subject to corruption and its integrity must be ensured. Existing centralized data integrity assurance schemes are rendered obsolete by the unique characteristics of MEC, i.e., unlike cloud servers, edge servers have only limited computing and storage resources and they are deployed massively and distributed geographically. Thus, it is a new and significant challenge to ensure cache data integrity over tremendous geographically-distributed resource-constrained edge servers. This paper proposes the CooperEDI scheme to guarantee the edge data integrity in a distributed manner. CooperEDI employs a distributed consensus mechanism to form a self-managed edge caching system. In the system, edge servers cooperatively ensure the integrity of cached replicas and repair corrupted ones. We experimentally evaluate its performance against three representative schemes. The results demonstrate that CooperEDI can effectively and efficiently ensure cache data integrity in the MEC environment.

Journal ArticleDOI
TL;DR: This work proposes Outer Product Enhanced Heterogeneous Information Network Embedding for Recommendation, called HopRec, which uses the outer product to model the pairwise relationship between user HIN embedding and item HIN embedding, yielding a two-dimensional interaction matrix.
Abstract: With the rapid development of the internet, increasingly rich data can be utilized by recommendation systems to improve their performance. Such data consist of heterogeneous information networks (HINs) made up of multiple node and link types. A critical challenge is how to effectively extract and apply the useful HIN information. In particular, the embedding-based recommendation approach has been widely used, as it can extract rich semantic and structural information from HINs. However, existing HIN-embedding-based recommendation methods only combine user embedding and item embedding through a simple concatenation or elementwise product, which does not suffice for an efficient recommendation model. In order to extract and utilize more comprehensive and subtle information from the embeddings, we propose Outer Product Enhanced Heterogeneous Information Network Embedding for Recommendation, called HopRec. The main idea is to utilize the outer product to model the pairwise relationship between user HIN embedding and item HIN embedding. Specifically, by performing an outer product between user HIN embedding and item HIN embedding, we obtain a two-dimensional interaction matrix. Subsequently, we obtain a rating prediction function by integrating matrix factorization (MF), user HIN embedding, item HIN embedding, and the interaction matrix. The results of experiments conducted on three open benchmark datasets show that HopRec significantly outperforms the state-of-the-art methods.
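The outer-product step described above is simple to make concrete: for a user embedding of dimension m and an item embedding of dimension n, the interaction matrix is m×n, with entry (i, j) being the product of the i-th user dimension and the j-th item dimension. The sketch below shows that step plus a deliberately simplified scoring function (a weighted sum over the matrix); the paper's actual predictor additionally fuses MF and both embeddings, and the weights here are illustrative.

```python
import numpy as np

def interaction_matrix(user_emb, item_emb):
    """Outer product of user and item HIN embeddings: entry (i, j) models
    the pairwise interaction between user dimension i and item dimension j,
    which concatenation or an elementwise product cannot express."""
    return np.outer(user_emb, item_emb)

def predict(user_emb, item_emb, weight, bias=0.0):
    """Simplified rating score: a learned weight over each cell of the
    interaction matrix (a stand-in for HopRec's full fusion with MF)."""
    return float(np.sum(weight * interaction_matrix(user_emb, item_emb)) + bias)

u = np.array([1.0, 0.5])          # toy user HIN embedding (m = 2)
v = np.array([0.2, 0.4, 0.6])     # toy item HIN embedding (n = 3)
M = interaction_matrix(u, v)      # shape (2, 3)
score = predict(u, v, weight=np.ones((2, 3)))
```

Note that, unlike an elementwise product, the outer product does not require the two embeddings to share the same dimension.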

Journal ArticleDOI
TL;DR: This article presents and formulates the multiple edge application deployment (MEAD) problem in the MEC environment, aiming to maximize app users’ overall service quality at minimum deployment cost while considering application shareability and communication interference, and proposes a heuristic approach, namely, the deployment-priority greedy via the divide-and-conquer strategy (DPG-D&C).
Abstract: Mobile edge computing (MEC), as an emerging computing paradigm, allows app vendors to deploy their mobile and/or IoT applications on edge servers to deliver low-latency services to their app users. However, when an edge server needs to serve excessive app users concurrently, severe interference is incurred, which immediately reduces app users’ achievable data rates and consequently impacts their perceived service quality. This is a major challenge to the app vendor’s attempt to minimize the edge resources required for serving its app users with satisfactory service quality. To tackle this challenge, in this paper, we present and formulate this multiple edge application deployment (MEAD) problem in the MEC environment, aiming to maximize app users’ overall service quality at minimum deployment cost, considering application shareability and communication interference. We prove that the MEAD problem is NP-hard. Then, we propose a heuristic approach, namely deployment-priority greedy via divide-and-conquer strategy (DPG-D&C), to solve the MEAD problem effectively and efficiently. We evaluate our approach extensively by using a widely-used real-world dataset. The experimental results show that DPG-D&C significantly outperforms state-of-the-art approaches.
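To give a feel for the greedy flavor of approaches like the one above, the toy sketch below repeatedly deploys an application on whichever candidate edge server covers the most not-yet-served users. This is a generic set-cover-style greedy under assumed inputs, not the paper's DPG-D&C algorithm, which additionally accounts for interference, shareability, and deployment cost.

```python
def greedy_deploy(servers, coverage):
    """Toy greedy deployment: at each step, deploy on the server whose
    coverage adds the most unserved users; stop when no server adds any.

    `coverage` maps a server id to the set of user ids it can serve."""
    served, deployed = set(), []
    remaining = list(servers)
    while remaining:
        best = max(remaining, key=lambda s: len(coverage[s] - served))
        if not coverage[best] - served:
            break  # no server can serve any additional user
        deployed.append(best)
        served |= coverage[best]
        remaining.remove(best)
    return deployed, served

# Hypothetical coverage sets for three edge servers and five users.
coverage = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {5}}
deployed, served = greedy_deploy(["s1", "s2", "s3"], coverage)
```

Here the greedy picks `s1` first (three new users), then `s2` and `s3` (one new user each), serving all five users with three deployments.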

Journal ArticleDOI
TL;DR: In this paper, high-throughput sequencing analysis revealed significantly increased presence of Legionella due to extreme water stagnation, highlighting elevated exposure risks to Legionella from building water systems during re-opening of previously closed buildings.

Journal ArticleDOI
TL;DR: CoRec as discussed by the authors uses a context-aware encoder-decoder model that randomly selects the previous output of the decoder or the embedding vector of a ground truth word as context to make the model gradually aware of previous alignment choices.
Abstract: Commit messages recorded in version control systems contain valuable information for software development, maintenance, and comprehension. Unfortunately, developers often commit code with empty or poor-quality commit messages. To address this issue, several studies have proposed approaches to generate commit messages from commit diffs. Recent studies use neural machine translation algorithms to translate git diffs into commit messages and have achieved some promising results. However, these learning-based methods tend to generate high-frequency words but ignore low-frequency ones. In addition, they suffer from exposure bias, which leads to a gap between the training and testing phases. In this article, we propose CoRec to address these two limitations. Specifically, we first train a context-aware encoder-decoder model that randomly selects either the previous output of the decoder or the embedding vector of a ground-truth word as context, so that the model gradually becomes aware of its previous alignment choices. Given a diff for testing, the trained model is reused to retrieve the most similar diff from the training set. Finally, we use the retrieved diff to guide the probability distribution over the final generated vocabulary. Our method combines the advantages of both information retrieval and neural machine translation. We evaluate CoRec on a dataset from Liu et al. and a large-scale dataset crawled from 10K popular Java repositories on GitHub. Our experimental results show that CoRec significantly outperforms the state-of-the-art method NNGen by 19% on average in terms of BLEU.
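The "randomly select the previous decoder output or the ground-truth word" idea above is the scheduled-sampling family of techniques for mitigating exposure bias. The sketch below illustrates only that input-selection step in isolation, with made-up tokens and a plain probability knob; the paper's actual model makes this choice per decoding step inside a trained encoder-decoder.

```python
import random

def choose_decoder_inputs(ground_truth, predicted, teacher_prob, rng):
    """Scheduled-sampling-style input selection for a decoder.

    At each step, feed the ground-truth token with probability
    `teacher_prob`; otherwise feed the model's own previous prediction.
    Annealing `teacher_prob` from 1 toward 0 during training makes the
    training conditions gradually resemble test-time decoding."""
    return [gt if rng.random() < teacher_prob else pred
            for gt, pred in zip(ground_truth, predicted)]

rng = random.Random(0)
gt = ["fix", "null", "check", "in", "parser"]      # hypothetical target message
pred = ["add", "null", "guard", "in", "lexer"]     # hypothetical model outputs
early = choose_decoder_inputs(gt, pred, teacher_prob=1.0, rng=rng)  # all teacher-forced
late = choose_decoder_inputs(gt, pred, teacher_prob=0.0, rng=rng)   # all self-fed
```

Early in training the decoder sees only ground-truth context; late in training it conditions entirely on its own previous outputs, closing the train/test gap the abstract describes.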