What are the limitations of federated learning in terms of privacy?

Federated Learning (FL) has several privacy limitations. Existing FL methods require synchronized communication and still carry a risk of privacy leakage. Privacy concerns restrict access to medical data, preventing the full exploitation of deep learning techniques in healthcare. Personalized FL addresses non-identically and independently distributed (non-IID) data but raises privacy concerns because patient-level information is exchanged. To address this, Privacy-preserving Community-Based Federated machine Learning (PCBFL) uses Secure Multiparty Computation to securely calculate patient-level similarity scores across hospitals. The need for uncertainty quantification under data-privacy constraints is addressed by training customized local Bayesian models that characterize model uncertainty. Finally, a variant of personalized graph federated learning (PGFL) ensures privacy through differential privacy, specifically zero-concentrated differential privacy.
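Zero-concentrated differential privacy in FL is typically obtained via the Gaussian mechanism: each client clips its model update to bound sensitivity, then adds Gaussian noise before sending it to the server. The sketch below illustrates that pattern; the function name and parameters are illustrative, not taken from the PGFL paper.

```python
import numpy as np

def privatize_update(update, clip_norm, noise_std, rng):
    """Clip a client's model update and add Gaussian noise.

    This is the Gaussian mechanism commonly used to achieve
    zero-concentrated differential privacy (zCDP). Names and
    parameter choices here are illustrative, not from the cited work.
    """
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / norm)  # bound the L2 sensitivity
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

rng = np.random.default_rng(0)
update = np.array([3.0, 4.0])                      # L2 norm 5.0
private = privatize_update(update, clip_norm=1.0, noise_std=0.1, rng=rng)
print(np.linalg.norm(update * min(1.0, 1.0 / 5.0)))  # clipped norm: 1.0
```

The privacy budget is then determined by the ratio of `noise_std` to `clip_norm`; smaller clip norms or larger noise give stronger guarantees at the cost of accuracy.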
What is federated learning?

Federated Learning (FL) is a machine learning paradigm in which clients jointly train a global model by aggregating their locally trained models without sharing any local training data. FL addresses the challenge of training models in a distributed environment with heterogeneous data distributions across clients. It suffers from the "client-drift" problem, where each client converges to its own local optimum, leading to slower convergence and poor performance of the aggregated model. To overcome this limitation, regularization techniques based on adaptive self-distillation (ASD) have been proposed. These techniques adapt to each client's training data based on how close the local model's predictions are to the global model's, and on the client's label distribution. The proposed regularization can be integrated with existing FL algorithms, leading to improved performance.
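The aggregation step described above is most often a data-size-weighted average of client parameters, as in FedAvg. A minimal sketch, with illustrative function and variable names:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client model parameters by a data-size-weighted average.

    client_weights: list of dicts mapping parameter name -> np.ndarray
    client_sizes:   number of training samples on each client
    Names are illustrative; this is the FedAvg-style aggregation rule,
    not any one paper's exact implementation.
    """
    total = sum(client_sizes)
    return {
        name: sum((n / total) * w[name]
                  for w, n in zip(client_weights, client_sizes))
        for name in client_weights[0]
    }

# Two clients with tiny 1-layer "models"; the larger client dominates.
clients = [{"w": np.array([1.0, 1.0])}, {"w": np.array([3.0, 3.0])}]
agg = fedavg(clients, client_sizes=[1, 3])
print(agg["w"])  # [2.5 2.5]
```

Client drift arises precisely because each `w[name]` was optimized against a different local distribution, so the average can sit far from any client's optimum.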
What is the state of the art in federated learning?

Federated learning (FL) is a promising paradigm that allows devices to learn without sharing data with a centralized server. FL models based on game theory (GT) have been developed to maximize profit and to support authentication, privacy management, trust management, and threat detection. However, the inherent characteristics of federated learning raise security concerns, such as backdoor attacks, which can introduce backdoored functionality into the global model and cause malicious images to be misclassified. Proposed defenses include anomalous-update detection, robust federated training, and restoration of backdoored models. Additionally, the heterogeneity of data distributions, model architectures, network environments, and hardware among participating clients has motivated Heterogeneous Federated Learning (HFL) methods, which address statistical, model, communication, and device heterogeneity, among other challenges. The state of the art in federated learning includes these advances and future research directions in HFL.
How can federated learning be used for smart cities?

Federated learning can be used in smart cities to address data-privacy concerns and improve the performance of AI models. It allows multiple edge nodes to collaboratively train a global model while keeping their raw data local, protecting data privacy. This approach has been applied to smart-city applications such as urban traffic-flow forecasting, intrusion detection in vehicular ad hoc networks (VANETs), and integration with smart-city applications for privacy preservation and protection of sensitive information. In this context, federated learning enables local, deep-learning-based classifiers for data streams that can be shared among vehicles or devices to improve accuracy and reduce communication overhead. Additionally, offloading federated learning models in edge-cloud collaborative smart cities can improve the efficiency of model transmission and aggregation while maintaining resource utilization.
In what scenarios can federated learning be used?

Federated Learning can be used wherever data is distributed and the goal is to train machine learning or deep learning models while protecting data privacy. It is particularly effective for training under local data heterogeneity and can speed up model aggregation by using client similarity as a weighting factor. It can also be applied in real-world scenarios with rapidly changing environments and heterogeneous hardware, where a synchronous protocol may be too inflexible. Asynchronous Federated Learning combined with a novel asynchronous model-aggregation protocol has been shown to significantly improve prediction performance while maintaining the same level of accuracy as centralized machine learning. Furthermore, Federated Learning performs well in extensively heterogeneous settings, providing excellent convergence speed, accuracy, and computation/communication efficiency.
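A common ingredient of asynchronous aggregation protocols is staleness weighting: a late-arriving client model is mixed into the global model with a weight that decays with how many global rounds it has missed. A minimal sketch of that idea (the decay rule and names are illustrative, in the style of FedAsync, not the protocol from the cited work):

```python
import numpy as np

def async_update(global_model, client_model, staleness, alpha=0.6):
    """Merge one late-arriving client model into the global model.

    Staler updates get a smaller mixing weight, a common pattern in
    asynchronous FL. The 1/(1+staleness) decay and alpha value are
    illustrative assumptions, not the cited paper's exact rule.
    """
    weight = alpha / (1.0 + staleness)   # decay with staleness
    return (1.0 - weight) * global_model + weight * client_model

g = np.array([0.0, 0.0])                 # current global parameters
c = np.array([1.0, 1.0])                 # arriving client parameters
fresh = async_update(g, c, staleness=0)  # mixing weight 0.6
stale = async_update(g, c, staleness=5)  # mixing weight 0.1
print(fresh, stale)  # [0.6 0.6] [0.1 0.1]
```

Because each arrival is merged immediately, no client blocks the round, which is what makes the protocol suitable for heterogeneous hardware.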
What are the most recently proposed federated learning methods?

Federated Learning is a popular method for training neural networks on distributed datasets. Recent work includes introducing Centered Kernel Alignment (CKA) into the loss function to measure the similarity of feature maps at the output layer, yielding faster model aggregation and improved global-model accuracy in non-IID scenarios. Another recent approach uses structured variational inference, adapted to the federated setting, to train models across distributed data sources without data leaving its original location. A secure federated graph learning system, S-Glint, has been designed to tackle communication bottlenecks in federated graph learning and outperforms existing solutions. Finally, a novel federated learning method for imbalanced data directly optimizes the area under the curve (AUC), with favorable theoretical results and efficacy demonstrated in extensive experiments.
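CKA compares two models' feature maps on the same batch and returns a similarity in [0, 1]. Below is the standard linear-CKA formulation; treating it as the term added to the loss is a sketch of the idea described above, not the authors' exact code.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two feature matrices.

    X, Y: (n_samples, n_features) activations from two models on the
    same batch. Standard formulation; its use as an FL regularizer
    here is illustrative, not the cited paper's implementation.
    """
    X = X - X.mean(axis=0)               # center features
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 8))             # e.g. local-model features
print(round(linear_cka(X, X), 6))        # identical features -> 1.0
```

In the FL setting, `X` would come from the local model and `Y` from the global model; maximizing CKA (or penalizing `1 - linear_cka(X, Y)`) pulls local feature maps toward the global ones, which is what mitigates drift under non-IID data.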