Research on Efficient Federated Learning Communication Mechanism Based on Adaptive Gradient Compression

TANG Lun, WANG Zhiping, PU Hao, WU Zhuang, CHEN Qianbin

Citation: TANG Lun, WANG Zhiping, PU Hao, WU Zhuang, CHEN Qianbin. Research on Efficient Federated Learning Communication Mechanism Based on Adaptive Gradient Compression[J]. Journal of Electronics & Information Technology, 2023, 45(1): 227-234. doi: 10.11999/JEIT211262


doi: 10.11999/JEIT211262
Article Information
    Author Biographies:

    TANG Lun: Male, Professor, Ph.D. His research interests include next-generation wireless communication networks, heterogeneous cellular networks, and software-defined networking.

    WANG Zhiping: Male, M.S. candidate. His research interests include collaboration mechanisms for edge intelligent computing and communication optimization for federated learning.

    PU Hao: Male, M.S. candidate. His research interests include resource allocation and collaboration mechanisms for edge intelligent computing.

    WU Zhuang: Male, M.S. candidate. His research interests include resource allocation for edge intelligent computing and UAV dynamic planning.

    CHEN Qianbin: Male, Professor, doctoral supervisor. His research interests include personal communications, multimedia information processing and transmission, and heterogeneous cellular networks.

    Corresponding author:

    WANG Zhiping, 2609116705@qq.com

  • CLC number: TN929.5


Funds: The National Natural Science Foundation of China (62071078), The Science and Technology Research Program of Chongqing Municipal Education Commission (KJZD-M201800601), Sichuan and Chongqing Key R&D Projects (2021YFQ0053)
  • Abstract: To address the non-negligible communication cost incurred by redundant gradient exchanges among large numbers of device nodes during Federated Learning (FL) in Internet of Things (IoT) scenarios, this paper proposes a threshold-adaptive gradient communication compression mechanism. First, a communication-efficient edge-federated learning (CE-EDFL) mechanism is adopted, in which edge servers act as intermediaries that aggregate the device-side local models, while the cloud aggregates the edge-server models and distributes the new parameters. Second, to further reduce the communication overhead of federated learning detection, a threshold-adaptive gradient compression mechanism (ALAG) is proposed, which compresses the local model gradient parameters to cut redundant communication between devices and edge servers. Experimental results show that, in large-scale IoT device scenarios, the proposed algorithm effectively improves overall communication efficiency by reducing the number of gradient exchange rounds while preserving the accuracy of the deep learning task.
  • Fig. 1  Communication-efficient detection model based on edge-federated learning

    Fig. 2  Threshold-adaptive selection mechanism

    Fig. 3  CCI values under different $ \alpha $

    Fig. 4  Model performance comparison under different numbers of clients

    Fig. 5  Training loss comparison of the four models

    Fig. 6  Detection accuracy comparison of the four models

    Fig. 7  Global communication rounds required by the four models

    Algorithm 1 Communication-efficient edge-federated learning algorithm
     Input: cloud-initialized parameter $ {\omega _0} $, number of clients N, number of edge servers L
     Output: global model parameter $ \omega (k) $
     (1) for $ k = 1,2, \cdots ,K $ do
     (2)   for each Client $ i = 1,2, \cdots ,N $ in parallel do
     (3)    compute the local model update $ \omega _i^l(k) $ using Eq. (3)
     (4)   end for
     (5)   if $ k\bmod {K_1} = 0 $ then
     (6)     for each Edge server $ l = 1,2, \cdots ,L $ in parallel do
     (7)      compute the edge aggregate $ {\omega ^l}(k) $ using Eq. (4)
     (8)      if $ k\bmod {K_1}{K_2} \ne 0 $ then
     (9)       devices under this edge server take the edge aggregate (no cloud update this round):
            $ \omega _i^l(k) \leftarrow {\omega ^l}(k) $
     (10)      end if
     (11)     end for
     (12)   end if
     (13)   if $ k\bmod {K_1}{K_2} = 0 $ then
     (14)     compute the global parameter $ \omega (k) $ using Eq. (5)
     (15)     for each Client $ i = 1,2, \cdots ,N $ in parallel do
     (16)      update the device parameters with the cloud parameter: $ \omega _i^l(k) \leftarrow \omega (k) $
     (17)     end for
     (18)   end if
     (19) end for
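As a companion to Algorithm 1, the following Python sketch shows the timing of the client-edge-cloud loop: devices take a local step every iteration, edge servers aggregate every $ {K_1} $ iterations, and the cloud aggregates every $ {K_1}{K_2} $ iterations. Eqs. (3)-(5) are not reproduced on this page, so a toy quadratic objective and FedAvg-style means stand in for them; all constants and names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Sketch of Algorithm 1: client-edge-cloud hierarchical aggregation.
# FedAvg-style means stand in for Eqs. (4) and (5); a toy quadratic
# objective stands in for the local update of Eq. (3).

rng = np.random.default_rng(0)
N, L, dim = 8, 2, 4             # clients, edge servers, model size
K, K1, K2, lr = 24, 3, 2, 0.1   # iterations, edge/cloud periods, step size

targets = rng.normal(size=(N, dim))   # per-client optima (toy local data)
edge_of = np.arange(N) % L            # static client -> edge assignment
w = np.zeros((N, dim))                # per-client copies of the model

for k in range(1, K + 1):
    # steps (2)-(4): every client takes one local gradient step
    w -= lr * (w - targets)           # gradient of 0.5 * ||w - target||^2

    if k % K1 == 0:                   # step (5): edge aggregation round
        for l in range(L):
            members = edge_of == l
            w_edge = w[members].mean(axis=0)   # step (7): edge aggregate
            if k % (K1 * K2) != 0:
                w[members] = w_edge            # step (9): sync within edge l

    if k % (K1 * K2) == 0:            # step (13): cloud aggregation round
        w[:] = w.mean(axis=0)         # steps (14)-(16): aggregate, broadcast

print("global model after", K, "iterations:", w[0].round(3))
```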
    Algorithm 2 A threshold-adaptive gradient compression algorithm
     Input: current iteration k of device node m, total number of iterations K, initialized global
        gradient $ \nabla F $
     Output: the device nodes $ {M_{\text{L}}} $ that complete training and meet the model requirements,
        where M is the set of device nodes
     (1) initialize the globally distributed parameter $ \omega (k - 1) $
     (2)  for $ k = 1,2, \cdots ,K $
     (3)    for $ m = 1,2, \cdots ,M $ do
     (4)    compute the local parameter gradient $ \nabla {F_m}(\theta (k - 1)) $ at node m
     (5)    check whether the gradient satisfies the self-check condition of Eq. (16)
     (6)    if satisfied, skip this round of communication and accumulate the gradient locally;
     (7)    gradient update: $ \nabla {F_m}(\theta (k)) \leftarrow \nabla {F_m}(\theta (k - 1)) $
     (8)    otherwise, upload the gradient $ \nabla {F_m}(\theta (k - 1)) $ to the edge server
     (9)    end for
     (10)  end for
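The self-check condition of Eq. (16) is not reproduced on this page, so the sketch below assumes a LAG-style rule in its place: a device skips the upload when the change in its gradient since the last upload falls below a threshold, and accumulates skipped gradients locally as in step (6). The threshold value and the toy gradient stream are assumptions for illustration only.

```python
import numpy as np

# Sketch of Algorithm 2: threshold-based upload skipping with local
# gradient accumulation. A LAG-style innovation test stands in for the
# self-check of Eq. (16), which is not shown on this page.

rng = np.random.default_rng(1)
M, K, dim = 4, 10, 3
threshold = 0.5                        # assumed stand-in for Eq. (16)'s bound

last_uploaded = np.zeros((M, dim))     # gradient last seen by the edge server
accumulated = np.zeros((M, dim))       # locally accumulated skipped gradients
uploads = 0

for k in range(1, K + 1):
    for m in range(M):
        grad = rng.normal(scale=1.0 / k, size=dim)  # step (4): local gradient
        candidate = grad + accumulated[m]           # include skipped residual
        if np.linalg.norm(candidate - last_uploaded[m]) < threshold:
            accumulated[m] += grad                  # steps (5)-(7): skip round
        else:
            last_uploaded[m] = candidate            # step (8): upload to edge
            accumulated[m] = 0.0
            uploads += 1

print(f"uploads: {uploads} of {M * K} rounds "
      f"({100 * uploads / (M * K):.1f}% communicated)")
```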

    Table 1  Model detection accuracy and compression rate under different $ \alpha $ values

    $ \alpha $   Avg. comm. rounds before   Avg. comm. rounds after   Avg. test accuracy   Compression rate (%)
    0.1          400                        32                        0.9175               8.00
    0.2          400                        258                       0.9298               64.50
    0.3          400                        270                       0.9301               67.50
    0.4          400                        295                       0.9314               73.75
    0.5          400                        328                       0.9335               82.00
    0.6          400                        342                       0.9341               85.50
    0.7          400                        351                       0.9336               87.75
    0.8          400                        365                       0.9352               91.25
    0.9          400                        374                       0.9351               93.75
    1.0          400                        400                       0.9349               100.00
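Reading Table 1: the compression-rate column matches the ratio of average communication rounds after compression to rounds before it (a reading inferred from the rows themselves rather than stated on this page):

$$ \text{CR} = \frac{\bar N_{\text{after}}}{\bar N_{\text{before}}} \times 100\% ,\qquad \text{e.g. } \alpha = 0.5: \frac{328}{400} = 82.00\% $$

Since average test accuracy varies by less than 0.02 across the sweep (0.9175 to 0.9352), the main effect of lowering $ \alpha $ is fewer uploads.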

    Table 2  Performance comparison of the algorithms under different $ \alpha $, $ \beta $

    Metric                                        LAG      EAFLM    ALAG
    Acc (train set)                               0.8890   0.9368   0.9342
    CR (%)                                        5.1100   8.7700   8.0000
    CCI ($ {\beta _1} = 0.4,{\beta _2} = 0.6 $)   0.9274   0.9206   0.9318
    CCI ($ {\beta _1} = 0.5,{\beta _2} = 0.5 $)   0.9220   0.9226   0.9331
    CCI ($ {\beta _1} = 0.6,{\beta _2} = 0.4 $)   0.9167   0.9247   0.9315
Publication History
  • Received: 2021-11-12
  • Revised: 2022-04-22
  • Published online: 2022-04-28
  • Issue published: 2023-01-17
