Author

Anmin Fu

Bio: Anmin Fu is an academic researcher from Nanjing University of Science and Technology. The author has contributed to research in topics: Computer science & Cloud computing. The author has an h-index of 16 and has co-authored 79 publications receiving 858 citations. Previous affiliations of Anmin Fu include Nanjing University of Posts and Telecommunications & Xidian University.


Papers
Journal ArticleDOI
TL;DR: This article proposes a privacy-preserving federated learning scheme in fog computing that not only guarantees both data security and model security but also completely resists collusion attacks launched by multiple malicious entities.
Abstract: Federated learning can combine a large number of scattered user groups and train models collaboratively without uploading data sets, so that the server never collects users' sensitive data. However, the model in federated learning can still expose information about users' training sets, and the uneven amount of data owned by users in multi-user scenarios leads to inefficient training. In this article, we propose a privacy-preserving federated learning scheme in fog computing. Acting as a participant, each fog node is enabled to collect Internet-of-Things (IoT) device data and complete the learning task in our scheme. This design mitigates the low training efficiency and poor model accuracy caused by the uneven distribution of data and the large gap in computing power. We enable IoT device data to satisfy ε-differential privacy to resist data attacks and leverage the combination of blinding and Paillier homomorphic encryption against model attacks, which realizes the secure aggregation of model parameters. In addition, we formally verify that our scheme not only guarantees both data security and model security but also completely resists collusion attacks launched by multiple malicious entities. Our experiments based on the Fashion-MNIST data set show that our scheme is highly efficient in practice.
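To make the aggregation idea concrete, here is a minimal Python sketch (not the authors' implementation) that combines Laplace noise for ε-differential privacy with pairwise blinding masks that cancel when the aggregator sums the uploads; the paper additionally layers Paillier homomorphic encryption on the blinded parameters, which is omitted here. All names and parameter values are illustrative.

```python
import numpy as np

def laplace_perturb(update, epsilon, sensitivity=1.0, rng=None):
    """Perturb a local model update with Laplace noise
    (toy epsilon-DP calibration: scale = sensitivity / epsilon)."""
    if rng is None:
        rng = np.random.default_rng()
    return update + rng.laplace(0.0, sensitivity / epsilon, size=update.shape)

def pairwise_masks(num_nodes, dim, seed=0):
    """Blinding masks with r_ij = -r_ji, so the masks of all nodes cancel
    when the aggregation server sums the blinded uploads."""
    rng = np.random.default_rng(seed)
    masks = [np.zeros(dim) for _ in range(num_nodes)]
    for i in range(num_nodes):
        for j in range(i + 1, num_nodes):
            r = rng.normal(size=dim)
            masks[i] += r
            masks[j] -= r
    return masks

rng = np.random.default_rng(42)
local_updates = [rng.normal(size=4) for _ in range(3)]   # 3 fog nodes, 4 parameters
masks = pairwise_masks(num_nodes=3, dim=4)

# Each fog node perturbs (epsilon-DP) and blinds its update before uploading.
uploads = [laplace_perturb(u, epsilon=1.0, rng=rng) + m
           for u, m in zip(local_updates, masks)]

# The server only sees blinded values; summing them cancels the masks.
aggregate = np.sum(uploads, axis=0)
print("noisy aggregate:", aggregate)
print("true sum       :", np.sum(local_updates, axis=0))
```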

132 citations

Journal ArticleDOI
TL;DR: This paper proposes a new privacy-aware public auditing mechanism for shared cloud data by constructing a homomorphic verifiable group signature that eliminates the abuse of single-authority power and provides non-frameability.
Abstract: Today, cloud storage has become one of the most critical services, because users can easily modify and share data with others in the cloud. However, the integrity of shared cloud data is vulnerable to inevitable hardware faults, software failures, or human errors. To ensure the integrity of the shared data, some schemes have been designed to allow public verifiers (i.e., third-party auditors) to efficiently audit data integrity without retrieving the entire users' data from the cloud. Unfortunately, public auditing on the integrity of shared data may reveal data owners' sensitive information to the third-party auditor. In this paper, we propose a new privacy-aware public auditing mechanism for shared cloud data by constructing a homomorphic verifiable group signature. Unlike existing solutions, our scheme requires at least t group managers to recover a trace key cooperatively, which eliminates the abuse of single-authority power and provides non-frameability. Moreover, our scheme ensures that group users can trace data changes through a designated binary tree and can recover the latest correct data block when the current data block is damaged. In addition, the formal security analysis and experimental results indicate that our scheme is provably secure and efficient.
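The abstract does not detail how the threshold recovery is realized; a standard way to let "at least t group managers" cooperatively recover a trace key is Shamir secret sharing over a prime field. The sketch below is a hypothetical illustration of that threshold step only, not the paper's homomorphic verifiable group signature, and all names and values are invented.

```python
import random

PRIME = 2**127 - 1  # Mersenne prime, large enough for a toy trace key

def split_secret(secret, t, n, prime=PRIME):
    """Split `secret` into n shares so that any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, prime) for k, c in enumerate(coeffs)) % prime)
            for x in range(1, n + 1)]

def recover_secret(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 recovers the secret from >= t shares."""
    secret = 0
    for i, (x_i, y_i) in enumerate(shares):
        num, den = 1, 1
        for j, (x_j, _) in enumerate(shares):
            if i != j:
                num = (num * -x_j) % prime
                den = (den * (x_i - x_j)) % prime
        secret = (secret + y_i * num * pow(den, -1, prime)) % prime
    return secret

# Toy run: the trace key is split among 5 group managers; any 3 can recover it,
# so no single authority can trace group members on its own.
trace_key = 123456789
shares = split_secret(trace_key, t=3, n=5)
assert recover_secret(shares[:3]) == trace_key
assert recover_secret(random.sample(shares, 3)) == trace_key
```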

110 citations

Journal ArticleDOI
TL;DR: This article proposes VFL, a verifiable, privacy-preserving federated learning scheme for big data in industrial IoT that uses Lagrange interpolation to elaborately set interpolation points for verifying the correctness of the aggregated gradients.
Abstract: Due to the strong analytical ability of big data, deep learning has been widely applied to model the data collected in industrial IoT. However, because of privacy issues, traditional centralized learning that gathers the data is not applicable to industrial scenarios with sensitive training sets, such as face recognition and medical systems. Recently, federated learning has received widespread attention, since it trains a model by sharing only gradients without accessing training sets. But existing research reveals that the shared gradients still retain sensitive information about the training set. Even worse, a malicious aggregation server may return forged aggregated gradients. In this paper, we propose VFL, a verifiable, privacy-preserving federated learning scheme for big data in industrial IoT. Specifically, we use Lagrange interpolation to elaborately set interpolation points for verifying the correctness of the aggregated gradients. Compared with existing schemes, the verification overhead of VFL remains constant regardless of the number of participants. Moreover, we employ blinding to protect the privacy of the local gradients. If no more than n-2 of the n participants collude with the aggregation server, VFL guarantees that the encrypted gradients of the other participants cannot be inverted. Experimental evaluations corroborate the practical performance of VFL with high accuracy and efficiency.
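The abstract names Lagrange interpolation as the verification primitive without spelling out the protocol. As a heavily simplified, hypothetical illustration (not VFL itself, which also blinds the gradients and keeps the check constant-cost), each participant below encodes its gradient and a shared verification tag as a degree-1 polynomial and uploads only evaluations at agreed points; by linearity, the participants can interpolate the aggregate and detect a forged sum.

```python
import numpy as np

def interpolate(xs, ys, x):
    """Evaluate at x the unique low-degree polynomial through the points (xs, ys)."""
    total = 0.0
    for i, (x_i, y_i) in enumerate(zip(xs, ys)):
        term = y_i
        for j, x_j in enumerate(xs):
            if i != j:
                term *= (x - x_j) / (x_i - x_j)
        total += term
    return total

# Each participant i encodes f_i(0) = gradient_i and f_i(1) = tag_i, then
# uploads only the evaluations f_i(2) and f_i(3).
gradients = [0.5, -1.25, 2.0]      # toy scalar gradients
tags      = [7.0, 11.0, 13.0]      # verification values agreed among participants
eval_pts  = [2.0, 3.0]
uploads = [[interpolate([0.0, 1.0], [g, t], x) for x in eval_pts]
           for g, t in zip(gradients, tags)]

# The aggregation server sums the uploaded evaluations point-wise.
aggregated = np.sum(uploads, axis=0)

# Participants interpolate F = sum_i f_i, read off the aggregate at x = 0 and
# verify the interpolation point at x = 1; a forged aggregate breaks the check.
agg_gradient = interpolate(eval_pts, aggregated, 0.0)
check_value  = interpolate(eval_pts, aggregated, 1.0)
assert abs(agg_gradient - sum(gradients)) < 1e-9
assert abs(check_value - sum(tags)) < 1e-9
print("verified aggregated gradient:", agg_gradient)
```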

85 citations

Posted Content
TL;DR: This work provides the community with a timely and comprehensive review of backdoor attacks and countermeasures on deep learning, and presents key areas for future research on backdoors, such as empirical security evaluations of physical trigger attacks and, in particular, more efficient and practical countermeasures.
Abstract: This work provides the community with a timely and comprehensive review of backdoor attacks and countermeasures on deep learning. According to the attacker's capability and the affected stage of the machine learning pipeline, the attack surfaces are recognized to be wide and are formalized into six categories: code poisoning, outsourcing, pretrained, data collection, collaborative learning, and post-deployment. Attacks under each category are then surveyed. The countermeasures are categorized into four general classes: blind backdoor removal, offline backdoor inspection, online backdoor inspection, and post backdoor removal. Accordingly, we review the countermeasures, and compare and analyze their advantages and disadvantages. We also review the flip side of backdoor attacks, which have been explored for i) protecting the intellectual property of deep learning models, ii) acting as a honeypot to catch adversarial example attacks, and iii) verifying data deletion requested by the data contributor. Overall, research on defenses lags far behind research on attacks, and there is no single defense that can prevent all types of backdoor attacks. In some cases, an attacker can intelligently bypass existing defenses with an adaptive attack. Drawing insights from the systematic review, we also present key areas for future research on backdoors, such as empirical security evaluations of physical trigger attacks, and, in particular, solicit more efficient and practical countermeasures.

80 citations

Journal ArticleDOI
TL;DR: A summary of the existing literature on data integrity verification (DIV) is collated, aiming to present a solid and stimulating review of current academic achievements for interested readers.

65 citations


Cited by
Posted Content
TL;DR: This paper defines and explores proofs of retrievability (PORs): a POR scheme enables an archive or back-up service to produce a concise proof that a user can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety.
Abstract: In this paper, we define and explore proofs of retrievability (PORs). A POR scheme enables an archive or back-up service (prover) to produce a concise proof that a user (verifier) can retrieve a target file F, that is, that the archive retains and reliably transmits file data sufficient for the user to recover F in its entirety. A POR may be viewed as a kind of cryptographic proof of knowledge (POK), but one specially designed to handle a large file (or bitstring) F. We explore POR protocols here in which the communication costs, number of memory accesses for the prover, and storage requirements of the user (verifier) are small parameters essentially independent of the length of F. In addition to proposing new, practical POR constructions, we explore implementation considerations and optimizations that bear on previously explored, related schemes. In a POR, unlike a POK, neither the prover nor the verifier need actually have knowledge of F. PORs give rise to a new and unusual security definition whose formulation is another contribution of our work. We view PORs as an important tool for semi-trusted online archives. Existing cryptographic techniques help users ensure the privacy and integrity of files they retrieve. It is also natural, however, for users to want to verify that archives do not delete or modify files prior to retrieval. The goal of a POR is to accomplish these checks without users having to download the files themselves. A POR can also provide quality-of-service guarantees, i.e., show that a file is retrievable within a certain time bound.
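The paper's construction uses sentinels and error-correcting codes to keep the verifier's state and the proof compact; as a much cruder sketch of the same goal (checking that an archive still holds a file without downloading it), the hypothetical spot-check audit below tags blocks with a keyed MAC and challenges a random subset, so an archive that has dropped a noticeable fraction of blocks fails with high probability.

```python
import hmac, hashlib, os, random

BLOCK = 4096

def keyed_tag(key, index, block):
    """MAC over (index, block) so the archive cannot swap or reuse blocks."""
    return hmac.new(key, index.to_bytes(8, "big") + block, hashlib.sha256).digest()

# Setup: the user tags each block before handing the file to the archive and
# keeps only the key and the tags.
key = os.urandom(32)
file_data = os.urandom(BLOCK * 64)                      # toy 64-block file
blocks = [file_data[i:i + BLOCK] for i in range(0, len(file_data), BLOCK)]
tags = {i: keyed_tag(key, i, b) for i, b in enumerate(blocks)}

def audit(archive_blocks, challenge_size=10):
    """Challenge a random subset of block indices and verify the responses."""
    challenge = random.sample(range(len(blocks)), challenge_size)
    for i in challenge:
        response = archive_blocks.get(i)
        if response is None or not hmac.compare_digest(tags[i], keyed_tag(key, i, response)):
            return False
    return True

honest_archive = dict(enumerate(blocks))
cheating_archive = {i: b for i, b in enumerate(blocks) if i % 4 != 0}  # dropped 25%

print("honest archive passes audit:  ", audit(honest_archive))
print("cheating archive passes audit:", audit(cheating_archive))
```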

1,783 citations

01 Jan 2007
TL;DR: In this paper, the authors provide updates to IEEE 802.16's MIB for the MAC, PHY, and associated management procedures in order to accommodate recent extensions to the standard.
Abstract: This document provides updates to IEEE Std 802.16's MIB for the MAC, PHY, and associated management procedures in order to accommodate recent extensions to the standard.

1,481 citations

Journal Article
TL;DR: Algorithms are developed and evaluated to minimize the total number of Wi-Fi access points (APs) and determine their locations in any given environment, subject to throughput requirements and the constraint that every location in the area can reach at least k APs.
Abstract: More specifically, algorithms are developed and evaluated to minimize the total number of Wi-Fi access points (APs) and determine their locations in any given environment, subject to throughput requirements and the constraint that every location in the area can reach at least k APs. This paper focuses on using Wi-Fi for communicating with and localizing a robot. We have carried out thorough studies of Wi-Fi signal propagation characteristics in both indoor and outdoor environments, which form the foundation for Wi-Fi AP deployment and communication. To augment how human operators interact with such environments, a mobile robotic platform is developed. Oil and gas refineries can be hazardous environments for various reasons, including heat, toxic gases, and unexpected catastrophic failures. For example, during inspection, maintenance, or repair of facilities inside a refinery, people may be exposed to extremely high temperatures for long periods, to toxic gases including methane and H2S, and to unexpected catastrophic failures. When multiple Wi-Fi APs are close together, there is potential for interference; a graph-coloring heuristic is used to determine AP channel allocation. Additionally, Wi-Fi fingerprinting-based localization is developed. All of the algorithms implemented are evaluated in real-world settings using the developed robot, and the results are promising.
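The abstract mentions a graph-coloring heuristic for AP channel allocation without giving details; the sketch below shows the standard greedy-coloring idea on an invented toy interference graph, assigning the non-overlapping 2.4 GHz channels 1, 6, and 11 so that neighbouring APs avoid sharing a channel where possible.

```python
# Interference graph: nodes are APs, an edge joins APs close enough to interfere.
interference = {
    "AP1": {"AP2", "AP3"},
    "AP2": {"AP1", "AP3", "AP4"},
    "AP3": {"AP1", "AP2"},
    "AP4": {"AP2", "AP5"},
    "AP5": {"AP4"},
}
CHANNELS = [1, 6, 11]  # non-overlapping 2.4 GHz Wi-Fi channels

def greedy_channel_allocation(graph, channels):
    """Greedy graph coloring: visit APs by decreasing degree and give each one
    the first channel not used by an already-assigned interfering neighbour."""
    assignment = {}
    for ap in sorted(graph, key=lambda a: len(graph[a]), reverse=True):
        neighbour_channels = [assignment[n] for n in graph[ap] if n in assignment]
        free = [c for c in channels if c not in neighbour_channels]
        # If every channel is taken by a neighbour, reuse the least-used one.
        assignment[ap] = free[0] if free else min(channels, key=neighbour_channels.count)
    return assignment

print(greedy_channel_allocation(interference, CHANNELS))
```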

455 citations

Journal ArticleDOI
TL;DR: In this paper, a comprehensive survey of the emerging applications of federated learning in IoT networks is provided, which explores and analyzes the potential of FL for enabling a wide range of IoT services, including IoT data sharing, data offloading and caching, attack detection, localization, mobile crowdsensing and IoT privacy and security.
Abstract: The Internet of Things (IoT) is penetrating many facets of our daily life with the proliferation of intelligent services and applications empowered by artificial intelligence (AI). Traditionally, AI techniques require centralized data collection and processing that may not be feasible in realistic application scenarios due to the high scalability of modern IoT networks and growing data privacy concerns. Federated Learning (FL) has emerged as a distributed collaborative AI approach that can enable many intelligent IoT applications, by allowing for AI training at distributed IoT devices without the need for data sharing. In this article, we provide a comprehensive survey of the emerging applications of FL in IoT networks, beginning from an introduction to the recent advances in FL and IoT to a discussion of their integration. Particularly, we explore and analyze the potential of FL for enabling a wide range of IoT services, including IoT data sharing, data offloading and caching, attack detection, localization, mobile crowdsensing, and IoT privacy and security. We then provide an extensive survey of the use of FL in various key IoT applications such as smart healthcare, smart transportation, Unmanned Aerial Vehicles (UAVs), smart cities, and smart industry. The important lessons learned from this review of the FL-IoT services and applications are also highlighted. We complete this survey by highlighting the current challenges and possible directions for future research in this booming area.

319 citations