Federated Learning Methods for Analytics of Big and Sensitive Distributed Data and Survey

EasyChair Preprint 9819, 5 pages, Date: March 5, 2023

Abstract

This article focuses on federated learning for the analytics of big, distributed, sensitive data. The main focus is on the most commonly used technology platforms, TensorFlow Federated, PySyft, Flower, and IBM Federated Learning, from the point of view of their usability for edge computing. Training PyTorch models with differential privacy (DP) is more scalable than with existing state-of-the-art methods. Differential privacy is a mathematically rigorous framework for quantifying the anonymisation of sensitive data; it is often used in analytics, with growing interest in the machine learning (ML) community. Training on distributed data at the edge is attractive because it limits the exposure of privacy-sensitive records and avoids transferring huge volumes of data. Data sensitivity and data volume are the main challenges in federated learning, which protects device data by exchanging only model updates rather than raw data.

Keyphrases: Artificial Intelligence, Federated Learning, TensorFlow Federated, differential privacy, Flower
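The abstract's claim about scalable DP training of PyTorch models is usually realised with DP-SGD, i.e. per-sample gradient clipping plus Gaussian noise added to each update. The following is a minimal sketch of that idea, assuming the Opacus library (which the abstract does not name); the toy model, synthetic data, and the noise_multiplier, max_grad_norm, and delta values are illustrative assumptions, not the authors' setup.

```python
# Sketch only: DP-SGD training of a PyTorch model via Opacus (assumed library).
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy dataset standing in for sensitive, locally held data.
features = torch.randn(256, 20)
labels = torch.randint(0, 2, (256,))
loader = DataLoader(TensorDataset(features, labels), batch_size=32)

model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
criterion = nn.CrossEntropyLoss()

# Wrap model, optimizer, and loader so gradients are clipped per sample
# and noised before each update (DP-SGD); hyperparameters are assumptions.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.1,  # assumed noise scale
    max_grad_norm=1.0,     # assumed per-sample clipping bound
)

for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

# Privacy budget spent so far, for an assumed delta.
print(f"epsilon = {privacy_engine.get_epsilon(delta=1e-5):.2f} at delta = 1e-5")
```

In a federated setting of the kind the abstract surveys, each edge client could run local training of this form and then share only the resulting model update with the server, so raw sensitive data never leaves the device.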