Project description
The advent of distributed Machine Learning (ML) has enabled sophisticated analytics at the network's edge. This decentralized, large-scale ML architecture is known as Federated Learning (FL). FL aims to enable multiple actors to build a common and robust ML model over multiple local datasets. Furthermore, the new wave of FL frameworks promotes data privacy, security, access rights, and access to heterogeneous data. However, the variety of these frameworks calls for an experimental performance evaluation. This project therefore aims to extensively analyze, evaluate, and compare popular federated learning frameworks (a similar comparison paper can be found here: https://link.springer.com/content/pdf/10.1007/s10586-021-03240-4.pdf).

The main intended tasks of this project are:

1. Evaluating the following open-source federated learning frameworks (a minimal deployment sketch is given after this list):
- Paddle Federated Learning (https://github.com/PaddlePaddle/PaddleFL)
- PySyft / PyGrid (https://github.com/OpenMined/PySyft)
- Flower (https://github.com/adap/flower)
- TensorFlow Federated (https://github.com/tensorflow/federated)
- FEDn (https://github.com/scaleoutsystems/fedn)
- Intel OpenFL (https://github.com/intel/openfl)
- FATE (https://github.com/FederatedAI/FATE)
2. Designing the benchmarking suite (experiment design): the network architecture (the ML model, e.g., LSTM, CNN), the datasets used (MNIST, CIFAR-10 & CIFAR-100, IMDB, and CASA activity recognition for IoT data), and the benchmark tool/framework; a data-partitioning sketch follows below.
3. Theoretically comparing the federated algorithms the frameworks support (FedAvg, FedProx, etc.; a FedAvg sketch is given below), cross-device vs. cross-silo settings, and horizontal vs. vertical federated learning, as well as open-source status, supported computing paradigms (standalone simulation, distributed computing, on-device training), ML heterogeneity (PyTorch, TensorFlow, MXNet, etc.), development/coding language, and each framework's timeline.
4. Using existing comparison criteria and developing new ones: performance (tasks per unit time, i.e., throughput), resource consumption (CPU, memory, GPU), convergence, deployment effort, flexibility, accuracy, and scalability; a measurement sketch closes this description.
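To gauge deployment effort concretely, each framework will be driven by the smallest possible client/server pair. As one illustration, a near-minimal Flower client could look like the sketch below. This assumes Flower's NumPyClient API around version 1.x; entry points change between releases, so the current Flower documentation is authoritative, and the trivial "model" here is a placeholder, not a real workload.

<syntaxhighlight lang="python">
import flwr as fl
import numpy as np

class TrivialClient(fl.client.NumPyClient):
    """Placeholder client: one zero 'weight' vector, no real training."""

    def get_parameters(self, config):
        return [np.zeros(10)]                 # current local model parameters

    def fit(self, parameters, config):
        # A real client would train locally here and return updated weights.
        return parameters, 1, {}              # (weights, num_examples, metrics)

    def evaluate(self, parameters, config):
        return 0.0, 1, {}                     # (loss, num_examples, metrics)

# Assumes a Flower server is already listening on this address, e.g. started
# with: fl.server.start_server(config=fl.server.ServerConfig(num_rounds=3))
fl.client.start_numpy_client(server_address="127.0.0.1:8080",
                             client=TrivialClient())
</syntaxhighlight>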
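For the benchmarking suite, the same data partition must be fed to every framework under test so that results stay comparable. Below is a minimal sketch of the classic label-shard non-IID partitioning scheme from the original FedAvg paper, applied to an MNIST-style label array; the function name and parameters are illustrative and not taken from any of the frameworks above.

<syntaxhighlight lang="python">
import numpy as np

def shard_partition(labels, num_clients, shards_per_client, seed=0):
    """Pathological non-IID split: sort samples by label, cut them into
    shards, and deal a few shards to each client (McMahan et al., 2017)."""
    rng = np.random.default_rng(seed)
    order = np.argsort(labels)                       # group samples by class
    shards = np.array_split(order, num_clients * shards_per_client)
    shard_ids = rng.permutation(len(shards))
    return [
        np.concatenate([shards[s] for s in
                        shard_ids[i * shards_per_client:
                                  (i + 1) * shards_per_client]])
        for i in range(num_clients)
    ]

# Example: 60,000 MNIST-sized labels over 100 clients, 2 shards each,
# so most clients see samples from only about two digit classes.
labels = np.random.randint(0, 10, size=60_000)
parts = shard_partition(labels, num_clients=100, shards_per_client=2)
print(len(parts), parts[0].shape)  # -> 100 clients, ~600 indices each
</syntaxhighlight>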
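As a reference point for the theoretical algorithm comparison: the FedAvg aggregation step is simply a data-size-weighted average of the clients' model parameters. A minimal, framework-independent NumPy sketch (variable names are illustrative):

<syntaxhighlight lang="python">
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client parameters (McMahan et al., 2017).

    client_weights: one list of np.ndarray layers per client.
    client_sizes:   number of local training examples per client.
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total)
            for w, n in zip(client_weights, client_sizes))
        for layer in range(num_layers)
    ]

# Example: two clients sharing a single 2x2 weight matrix. Client 1 holds
# three times more data, so its parameters dominate the average.
clients = [[np.ones((2, 2))], [np.zeros((2, 2))]]
sizes = [30, 10]
print(fedavg(clients, sizes)[0])  # -> 0.75 everywhere (30/40 * 1 + 10/40 * 0)
</syntaxhighlight>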
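Finally, for the resource-consumption and throughput criteria, a lightweight, framework-agnostic probe can sample the training process from the outside, so that every framework is measured the same way. A sketch using psutil follows; the wrapped train_fn and the sampling interval are placeholders, and GPU counters would need an extra tool such as nvidia-smi/NVML.

<syntaxhighlight lang="python">
import time
import threading
import psutil

def measure(train_fn, interval=0.5):
    """Run train_fn while sampling CPU and RSS memory of this process.

    Returns (result, wall_seconds, peak_rss_bytes, avg_cpu_percent).
    """
    proc = psutil.Process()
    samples, stop = [], threading.Event()

    def sampler():
        proc.cpu_percent(None)               # prime the CPU counter
        while not stop.is_set():
            time.sleep(interval)
            samples.append((proc.cpu_percent(None), proc.memory_info().rss))

    t = threading.Thread(target=sampler, daemon=True)
    start = time.perf_counter()
    t.start()
    result = train_fn()                      # e.g. one federated round
    elapsed = time.perf_counter() - start
    stop.set(); t.join()

    peak_rss = max((m for _, m in samples), default=proc.memory_info().rss)
    avg_cpu = sum(c for c, _ in samples) / max(len(samples), 1)
    return result, elapsed, peak_rss, avg_cpu

# Throughput = completed training tasks (e.g. rounds) per unit time:
#   rounds_per_second = num_rounds / elapsed
</syntaxhighlight>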