Project Overview
I am Cole Feuer, a master’s student in Computer Science at RPI. This project empirically compares three federated aggregation methods—FedAvg, FedLAMA, and FedDist—across vision, tabular, and text domains using a unified evaluation pipeline.
Technical Summary
- FedAvg: Standard weight‑averaging of client updates.
- FedLAMA: Layer‑wise adaptive aggregation that adjusts how each layer is aggregated based on its update variability across clients.
- FedDist: Soft‑label distillation via a small public dataset.
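The core of FedAvg is a sample-size-weighted average of client parameters. A minimal sketch (the function name and data layout here are illustrative, not taken from the project codebase):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: average client models, weighted by local sample count.

    client_weights: one list of per-layer numpy arrays per client.
    client_sizes:   number of local training samples per client.
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    return [
        sum((n / total) * w[layer] for w, n in zip(client_weights, client_sizes))
        for layer in range(num_layers)
    ]

# Toy example: two single-layer clients, the second holding 3x the data.
clients = [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]]
sizes = [1, 3]
global_layer = fedavg(clients, sizes)[0]  # pulled toward the larger client
```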
Each method was evaluated on:
- CNN on CIFAR‑10 (image classification)
- MLP on MIMIC‑III (clinical tabular data)
- DistilBERT on Sentiment140 (text sentiment)
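FedDist's distillation step can be sketched as follows: clients score a shared public batch, their softened predictions are averaged into soft labels, and the global model is trained against those labels. This is a hedged illustration of the general technique; the temperature value, function names, and array shapes are assumptions, not the project's actual implementation:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_targets(client_logits, temperature=2.0):
    """Average the clients' softened predictions on a public batch.

    client_logits: shape (num_clients, batch, num_classes).
    Returns soft labels of shape (batch, num_classes).
    """
    return softmax(np.asarray(client_logits), temperature).mean(axis=0)

def distillation_loss(student_logits, soft_labels, temperature=2.0):
    """Cross-entropy between student predictions and averaged soft labels."""
    p = softmax(student_logits, temperature)
    return -np.mean(np.sum(soft_labels * np.log(p + 1e-12), axis=-1))
```

Because only logits on the public set are exchanged, this style of aggregation also works across heterogeneous client architectures, which is one reason distillation-based methods are attractive when a modest public dataset exists.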
Code & Results
Full codebase, notebooks, and detailed results are available on GitHub.
Key Findings
- FedLAMA yielded more stable convergence in smaller‑scale models.
- FedDist excelled when a modest public dataset was available for distillation.
- FedAvg remained highly competitive on balanced‑data tasks.