A4 Article in conference proceedings
Communication-Efficient Federated Learning in Channel Constrained Internet of Things (2022)
Hu, T., Zhang, X., Chang, Z., Hu, F., & Hämäläinen, T. (2022). Communication-Efficient Federated Learning in Channel Constrained Internet of Things. In GLOBECOM 2022 IEEE Global Communications Conference (pp. 275-280). IEEE. IEEE Global Communications Conference. https://doi.org/10.1109/globecom48099.2022.10000898
JYU authors or editors
Publication details
All authors or editors: Hu, Tao; Zhang, Xinran; Chang, Zheng; Hu, Fengye; Hämäläinen, Timo
Parent publication: GLOBECOM 2022 IEEE Global Communications Conference
Place and date of conference: Rio de Janeiro, Brazil, 4.-8.12.2022
ISBN: 978-1-6654-3541-3
eISBN: 978-1-6654-3540-6
Journal or series: IEEE Global Communications Conference
ISSN: 2334-0983
eISSN: 2576-6813
Publication year: 2022
Publication date: 11/01/2023
Pages range: 275-280
Publisher: IEEE
Publication country: United States
Publication language: English
DOI: https://doi.org/10.1109/globecom48099.2022.10000898
Publication open access: Not open
Publication channel open access:
Publication is parallel published (JYX): https://jyx.jyu.fi/handle/123456789/85533
Abstract
Federated learning (FL) is able to utilize the computing capability and maintain the privacy of end devices by collecting and aggregating locally trained learning model parameters while keeping personal data local. As the most widely used FL framework, federated averaging (FedAvg) suffers from an expensive communication cost, especially when a large number of devices are involved in the FL process. Moreover, when considering asynchronous FL, the slowest device becomes the bottleneck due to the cask effect and determines the overall latency. In this work, we propose a communication-efficient federated learning framework with partial model aggregation (CE-FedPA) algorithm that utilizes a compression strategy and weighted device selection, which can significantly reduce the size of uploaded data and decrease the communication time. We perform a series of experiments on the MNIST/CIFAR-10 datasets, in both IID and non-IID data settings. We compare the communication time of different aggregation schemes, in terms of iteration rounds and target accuracy. Simulation results demonstrate that the uploading time of the proposed scheme is up to 4.3 times shorter than that of existing ones. Experiments on an end-to-end FL framework also verify the communication efficiency of CE-FedPA in a real-world setting.
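The abstract contrasts the proposed CE-FedPA with the FedAvg baseline, in which a server aggregates locally trained parameters weighted by each client's local dataset size. A minimal sketch of that baseline aggregation step (names, shapes, and the example values are illustrative assumptions, not taken from the paper):

```python
# Sketch of the FedAvg aggregation step: the server computes a weighted
# average of per-client parameter vectors, where each client's weight is
# proportional to its local dataset size. Illustrative only.

def fedavg_aggregate(client_params, client_sizes):
    """Return the size-weighted average of per-client parameter vectors.

    client_params: list of parameter vectors (lists of floats), one per client
    client_sizes:  list of local dataset sizes, one per client
    """
    total = sum(client_sizes)
    dim = len(client_params[0])
    global_params = [0.0] * dim
    for params, n in zip(client_params, client_sizes):
        weight = n / total
        for i, p in enumerate(params):
            global_params[i] += weight * p
    return global_params

# Example: two clients; the second holds three times more data,
# so its parameters dominate the aggregate.
clients = [[1.0, 2.0], [5.0, 6.0]]
sizes = [1, 3]
print(fedavg_aggregate(clients, sizes))  # [4.0, 5.0]
```

CE-FedPA reduces the cost of this step by compressing the uploaded parameters and selecting devices by weight, so that only part of the model traffic in the sketch above needs to traverse the constrained channel.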
Keywords: Internet of things; machine learning; data transfer; data protection; simulation
Free keywords: performance evaluation; training; data privacy; costs; federated learning; simulation; data integrity
Contributing organizations
Ministry reporting: Yes
Reporting Year: 2022
JUFO rating: 1