PyTorch implementation of Compressed Private Aggregation for Scalable Federated Learning over Massive Networks (IEEE TMC 2025, IEEE ICASSP 2023)

Compressed Private Aggregation for Scalable Federated Learning over Massive Networks

Introduction

In this work we propose Compressed Private Aggregation (CPA), a scheme for scalable federated learning over massive networks that allows large-scale deployments to communicate at extremely low bit rates while providing privacy, anonymity, and resilience to malicious users. Please refer to our paper for more details.
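For intuition only, below is a minimal sketch (in NumPy) of the general recipe that the training flags suggest: each user scalar-quantizes its model update to a few levels, privatizes the quantized levels with randomized response, and the server averages the reports. This is not the repository's implementation; all function names, parameters, and defaults here are illustrative, and a full implementation would also debias the randomized-response noise.

import numpy as np

def scalar_quantize(update, num_levels=4, clip=1.0):
    # Clip each coordinate to [-clip, clip] and map it to an integer level in {0, ..., num_levels - 1}.
    clipped = np.clip(update, -clip, clip)
    return np.round((clipped + clip) / (2 * clip) * (num_levels - 1)).astype(int)

def randomized_response(levels, num_levels=4, epsilon=0.5):
    # Keep each level with probability e^eps / (e^eps + num_levels - 1); otherwise replace it uniformly at random.
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + num_levels - 1)
    keep = np.random.rand(levels.size) < p_keep
    noise = np.random.randint(0, num_levels, size=levels.size)
    return np.where(keep, levels, noise)

def server_aggregate(private_levels, num_levels=4, clip=1.0):
    # Average the privatized levels over users and map back to [-clip, clip]
    # (for brevity, the randomized-response bias is not corrected here).
    mean_levels = private_levels.mean(axis=0)
    return mean_levels / (num_levels - 1) * (2 * clip) - clip

# Toy run: 1000 users, each holding a 10-dimensional model update.
updates = [0.1 * np.random.randn(10) for _ in range(1000)]
reports = np.stack([randomized_response(scalar_quantize(u)) for u in updates])
global_update = server_aggregate(reports)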

Usage

This code has been tested with Python 3.7.3, PyTorch 1.8.0, and CUDA 11.1.

Prerequisites

  1. PyTorch=1.8.0: https://pytorch.org
  2. scipy
  3. tqdm
  4. matplotlib
  5. torchinfo
  6. TensorboardX: https://github.com/lanpa/tensorboardX
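
Assuming a standard pip-based environment, the pure-Python dependencies can typically be installed with:

pip install scipy tqdm matplotlib torchinfo tensorboardX

PyTorch 1.8.0 (with CUDA 11.1) should be installed separately, following the instructions at https://pytorch.org.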

Training

python main.py --exp_name cpa --aggregation_method CPA --compression scalarQ --privacy RR --num_users 1000 --epsilon 0.5
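
Here, --aggregation_method CPA selects the proposed aggregation scheme, --compression scalarQ and --privacy RR are understood to select scalar quantization and randomized response, respectively, --num_users sets the number of participating users, and --epsilon sets the privacy budget.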

Testing

python main.py --exp_name cpa --eval 
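
Evaluation reuses the --exp_name given at training time; presumably, the checkpoint saved under that experiment name is loaded and evaluated.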
