This is an evaluation study of Model Heterogeneous Federated Learning (MHFL). It also aims to provide an easy-to-use MHFL benchmark.
The code of this project is under construction.
All code implementations are based on FederatedScope v0.3.0: https://github.com/alibaba/FederatedScope
We are grateful for their outstanding work.
- LOCAL: each client performs only local training, without any federated communication (see the sketch below).
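For reference, here is a minimal PyTorch sketch of this baseline. It is illustrative only, not the FederatedScope implementation; `client_models` and `client_loaders` are assumed per-client models and data loaders.

```python
import torch
import torch.nn as nn

def local_baseline(client_models, client_loaders, epochs=1, lr=0.01):
    """LOCAL baseline: each client trains its own (possibly unique) model
    on its own data; no parameters, logits, or prototypes are exchanged."""
    loss_fn = nn.CrossEntropyLoss()
    for model, loader in zip(client_models, client_loaders):
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        model.train()
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
    return client_models
```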
Abbreviation | Title | Venue | Materials |
---|---|---|---|
FedMD | FedMD: Heterogenous Federated Learning via Model Distillation | NeurIPS 2019 Workshop | [pub] [repository] |
FSFL | Few-Shot Model Agnostic Federated Learning | ACM MM 2022 | [pub] [repository] |
FCCL | Learn from Others and Be Yourself in Heterogeneous Federated Learning | CVPR 2022 | [pub] [repository] |
Abbreviation | Title | Venue | Materials |
---|---|---|---|
FML | Federated Mutual Learning | ArXiv 2020 | [pub] [repository] |
FedHeNN | Architecture Agnostic Federated Learning for Neural Networks | ICML 2022 | [pub] |
FedProto | FedProto: Federated Prototype Learning across Heterogeneous Clients | AAAI 2022 | [pub] [repository] |
FedPCL | Federated Learning from Pre-Trained Models: A Contrastive Learning Approach | NeurIPS 2022 | [pub] [repository] |
FedGH | FedGH: Heterogeneous Federated Learning with Generalized Global Header | ACM MM 2023 | [pub] [repository] |
Abbreviation | Title | Venue | Progress |
---|---|---|---|
DENSE | DENSE: Data-Free One-Shot Federated Learning | NeurIPS 2022 | [#########-] 95% |
FedKD | Communication-efficient federated learning via knowledge distillation | Nature Communications 2022 | [----------] 0% |
FedDistill | Federated Knowledge Distillation | Machine Learning and Wireless Communications (2022) | [----------] 0% |
TBD | ... | ... | ... |
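Several of the methods above (e.g., FedProto) communicate small per-class vectors instead of model weights, which is what makes heterogeneous architectures workable. Below is a minimal sketch of FedProto-style prototype exchange, simplified from the paper; the function names and the `model.features` / `embed_dim` interface are our assumptions, not the official code.

```python
import torch

def local_prototypes(model, loader, num_classes, embed_dim):
    """Client step: per-class mean of feature embeddings (prototypes).
    Assumes `model.features(x)` returns a (batch, embed_dim) embedding."""
    sums = torch.zeros(num_classes, embed_dim)
    counts = torch.zeros(num_classes)
    model.eval()
    with torch.no_grad():
        for x, y in loader:
            feats = model.features(x)
            for f, label in zip(feats, y):
                sums[int(label)] += f
                counts[int(label)] += 1
    return {c: sums[c] / counts[c] for c in range(num_classes) if counts[c] > 0}

def aggregate_prototypes(client_protos):
    """Server step: average prototypes class-wise across clients. Only these
    small vectors travel over the network, so each client's model
    architecture is free to differ."""
    buckets = {}
    for protos in client_protos:
        for c, p in protos.items():
            buckets.setdefault(c, []).append(p)
    return {c: torch.stack(ps).mean(dim=0) for c, ps in buckets.items()}
```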
We hope to test whether the above methods work well under different degrees of model heterogeneity. To this end, we conduct experiments under the following two settings.
- Low model heterogeneity: five CNN models that differ slightly in the number of channels and layers.
- High model heterogeneity: five structurally distinct models, including an MLP, a CNN, a ResNet, and a Wide ResNet.
For details of the model architectures, please refer to: MHFL/federatedscope/model_heterogeneity/model_settings
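For intuition, the sketch below shows the low-heterogeneity idea in PyTorch: CNNs built from one template with different widths and depths. The specific channel counts here are hypothetical; the actual five architectures are defined under model_settings.

```python
import torch.nn as nn

def make_cnn(channels=(16, 32), num_classes=10):
    """Small CNN for 32x32 RGB inputs (e.g., CIFAR-10/SVHN). Varying
    `channels` changes width and depth, giving mildly heterogeneous models."""
    layers, in_ch = [], 3
    for out_ch in channels:
        layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)]
        in_ch = out_ch
    side = 32 // 2 ** len(channels)  # spatial size after the pooling layers
    return nn.Sequential(*layers, nn.Flatten(),
                         nn.Linear(in_ch * side * side, num_classes))

# Five clients, five slightly different CNNs (hypothetical widths/depths):
client_models = [make_cnn((16, 32)), make_cnn((32, 64)), make_cnn((64, 128)),
                 make_cnn((16, 32, 64)), make_cnn((32, 64, 128))]
```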
Currently, we conduct experiments on three benchmark datasets: CIFAR-10, SVHN, and Office-10.
Users need to clone the source code and install FederatedScope (we suggest Python >= 3.9).
- Clone the source code:
```bash
git clone https://github.com/zza234s/MHFL
cd MHFL
```
- Install the required packages:
```bash
conda create -n fs python=3.9
conda activate fs
# Install PyTorch
conda install -y pytorch=1.10.1 torchvision=0.11.2 torchaudio=0.10.1 torchtext=0.11.1 cudatoolkit=11.3 -c pytorch -c conda-forge
```
- After the required packages are installed, install FederatedScope from source:
```bash
pip install -e .[dev]
```
- Enter the `federatedscope` folder:
```bash
cd federatedscope
```
- Run the script:
```bash
python main.py --cfg model_heterogeneity/methods/FedProto/FedProto_on_cifar10.yaml --client_cfg model_heterogeneity/model_settings/model_setting_CV_low_heterogeneity.yaml
```
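FederatedScope configs are yacs-based, so trailing key/value pairs on the command line should override entries in the YAML files (this is upstream FederatedScope behavior; the option names below, e.g. `seed` and `federate.total_round_num`, come from its default config):
```bash
# Assumed yacs-style overrides, as supported by upstream FederatedScope:
python main.py --cfg model_heterogeneity/methods/FedProto/FedProto_on_cifar10.yaml \
  --client_cfg model_heterogeneity/model_settings/model_setting_CV_low_heterogeneity.yaml \
  seed 1 federate.total_round_num 100
```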
Please feel free to contact me.
My email is: hanlinzhou@zjut.edu.cn
My WeChat is: poipoipoi8886