Personalized Federated Learning Method Based on Coalition Game and Knowledge Distillation

SUN Yanhua, SHI Yahui, LI Meng, YANG Ruizhe, SI Pengbo

Citation: SUN Yanhua, SHI Yahui, LI Meng, YANG Ruizhe, SI Pengbo. Personalized Federated Learning Method Based on Coalition Game and Knowledge Distillation[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT221203


doi: 10.11999/JEIT221203
Funds: Foundation of Beijing Municipal Commission of Education (KM202010005017)

    About the authors:

    SUN Yanhua: Female, Associate Professor. Research interests include machine learning and edge computing.

    SHI Yahui: Female, Master's student. Research interests include federated learning and machine learning.

    LI Meng: Male, Associate Professor. Research interests include edge computing and wireless communication networks.

    YANG Ruizhe: Female, Associate Professor. Research interests include blockchain technology and wireless communication networks.

    SI Pengbo: Male, Professor. Research interests include deep reinforcement learning and wireless communication networks.

    Corresponding author:

    SUN Yanhua, sunyanhua@bjut.edu.cn

  • CLC number: TP181

  • Abstract: To overcome the limitation that Federated Learning (FL) requires client data and models to be homogeneous, and to improve training accuracy, this paper proposes a personalized Federated learning algorithm based on Coalition game and Knowledge distillation (pFedCK). In this algorithm, each client uploads the soft predictions obtained by training on a public dataset to the central server and, based on cosine similarity, downloads the k most similar soft predictions from the server to form a coalition. The Shapley value from cooperative game theory is then used to measure the effect of multi-client collaboration and to quantify the cumulative contribution of each downloaded soft prediction to local personalization performance, which determines the optimal aggregation coefficient of every client in the coalition and yields a better aggregated model. Finally, knowledge distillation transfers the knowledge of the aggregated model into the local model, which is then trained on the private dataset. Simulation results show that, compared with other algorithms, pFedCK improves personalization accuracy by about 10%.
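    To make the coalition-forming step concrete, here is a minimal Python sketch of how a server could rank clients' soft predictions by cosine similarity and return the k most similar ones for client i. The names (select_coalition, logits) and the detail of excluding client i from its own coalition are our assumptions for illustration, not fixed by the paper.

```python
import numpy as np

def select_coalition(logits, i, k):
    """Return the indices of the k clients whose soft predictions on the
    public dataset are most similar, by cosine similarity, to client i's.

    logits: array of shape (n_clients, d); each row is one client's
    flattened soft prediction over the public dataset.
    """
    anchor = logits[i] / np.linalg.norm(logits[i])
    sims = (logits @ anchor) / np.linalg.norm(logits, axis=1)
    sims[i] = -np.inf                    # do not pair client i with itself
    return np.argsort(sims)[::-1][:k]    # the k most similar clients
```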
  • Figure 1  Average accuracy of the four algorithms on MNIST-EMNIST

    Figure 2  Average accuracy of the four algorithms on CIFAR10-CIFAR100

    Algorithm 1  The pFedCK algorithm
     Input: initial model parameters ${{\boldsymbol{\omega}}^0} = \left[ {\boldsymbol{\omega}}_1^0, {\boldsymbol{\omega}}_2^0, \cdots, {\boldsymbol{\omega}}_n^0 \right]$
     Output: ${\boldsymbol{\omega}} = \left[ {{\boldsymbol{\omega}}_1}, {{\boldsymbol{\omega}}_2}, \cdots, {{\boldsymbol{\omega}}_n} \right]$
     Begin:
      (1) Initialize the client model parameters ${{\boldsymbol{\omega}}^0} = \left[ {\boldsymbol{\omega}}_1^0, {\boldsymbol{\omega}}_2^0, \cdots, {\boldsymbol{\omega}}_n^0 \right]$
      (2) Transfer learning: each client $i$ trains to convergence on ${{\boldsymbol{D}}_{\rm{p}}}$ and ${{\boldsymbol{D}}_i}$
      (3) for t in global rounds:
      (4)  for i in range($N$):
      (5)   Compute ${\rm{logit}}_i$ on ${{\boldsymbol{D}}_{\rm{p}}}$ and upload it to the central server, forming $\left\{ {\rm{logit}}_i^t \right\}_{i = 1}^n$; based on cosine similarity,
      (6)   download the $k$ most similar soft predictions and form the cooperative game $\left( \left\{ {\rm{logit}}_j^t \right\}_{j \in S_k^t}, v \right)$
      (7)   for j in ${\boldsymbol{S}}_k^t$:
      (8)    for ${\boldsymbol{X}} \subseteq {\boldsymbol{S}}_k^t$:
      (9)     $\varphi_j^t\left( v \right) = \varphi_j^t\left( v \right) + \dfrac{\left( \left| {\boldsymbol{X}} \right| - 1 \right)!\,\left( \left| {\boldsymbol{S}} \right| - \left| {\boldsymbol{X}} \right| \right)!}{\left| {\boldsymbol{S}} \right|!} \cdot \left[ v\left( {\boldsymbol{X}} \right) - v\left( {\boldsymbol{X}} \setminus \left\{ j \right\} \right) \right]$
      (10)   $\theta_j^t = \dfrac{\max\left( \varphi_j^t, 0 \right)}{\left\| {\rm{logit}}_i^t - {\rm{logit}}_j^t \right\|}$
      (11)  $\theta_j^{t*} = \dfrac{\theta_j^t}{\displaystyle\sum\limits_j \theta_j^t}$
      (12)  ${\rm{logit}}_i^{t*} = p \cdot {\rm{logit}}_i + q \cdot \displaystyle\sum\limits_j \theta_j^{t*} \cdot {\rm{logit}}_j$
      (13)  ${{\boldsymbol{\omega}}_i} = {{\boldsymbol{\omega}}_i} - \eta_1 \cdot \lambda \cdot {\boldsymbol{\nabla}}_{{{\boldsymbol{\omega}}_i}} L_{\rm{KL}}\left( {\rm{logit}}_i^{t*}, {\rm{logit}}_i; {{\boldsymbol{D}}_{\rm{p}}} \right) - \eta_2 \cdot {\boldsymbol{\nabla}}_{{{\boldsymbol{\omega}}_i}} L_i\left( {{\boldsymbol{\omega}}_i}; {{\boldsymbol{D}}_i} \right)$
     End
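    To make steps (8)–(13) easier to trace, the sketch below implements the Shapley-value computation and the resulting aggregation weights in plain NumPy. It is a minimal illustration under stated assumptions: the characteristic function v, which this excerpt does not define, is supplied as a callable that scores any subset of coalition members (including the empty set), and the function names are ours.

```python
from itertools import combinations
from math import factorial
import numpy as np

def shapley_values(members, v):
    """Steps (8)-(9): exact Shapley value phi_j of each coalition member j,
    weighting every subset X that contains j by (|X|-1)!(|S|-|X|)!/|S|!.
    `v` maps a frozenset of members to that sub-coalition's value."""
    s = len(members)
    phi = {j: 0.0 for j in members}
    for j in members:
        others = [m for m in members if m != j]
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                X = frozenset(subset) | {j}
                w = factorial(len(X) - 1) * factorial(s - len(X)) / factorial(s)
                phi[j] += w * (v(X) - v(X - {j}))
    return phi

def aggregation_weights(phi, logit_i, logits):
    """Steps (10)-(11): clip negative contributions to zero, divide by the
    distance between soft predictions, and normalize to sum to one."""
    theta = {j: max(contrib, 0.0) / np.linalg.norm(logit_i - logits[j])
             for j, contrib in phi.items()}
    total = sum(theta.values()) or 1.0   # guard against all-zero contributions
    return {j: t / total for j, t in theta.items()}

def aggregate_logits(logit_i, logits, weights, p=0.3, q=0.7):
    """Step (12): blend the client's own soft prediction with the
    Shapley-weighted coalition sum (p, q as swept in Table 4)."""
    coalition = sum(w * logits[j] for j, w in weights.items())
    return p * logit_i + q * coalition
```

    Exact Shapley computation enumerates every subset of the coalition, i.e. O(2^k) evaluations of v, so it stays affordable only for small coalition sizes k; the distillation update of step (13) is then an ordinary two-term gradient step on the local model and is left to the surrounding training loop.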

    Table 1  Personalization accuracy of the four algorithms on different datasets

                MNIST-EMNIST                  CIFAR10-CIFAR100
    Algorithm   IID      Non-IID1  Non-IID2   IID      Non-IID1  Non-IID2
    FedMD       0.7128   0.6967    0.6577     0.5902   0.4430    0.4206
    TopK-FL     0.7307   0.7173    0.6767     0.6215   0.4701    0.4513
    KT-pFed     0.7368   0.7218    0.6802     0.6333   0.4831    0.4639
    pFedCK      0.7819   0.7422    0.7156     0.6869   0.5064    0.4717

    Table 2  Personalization accuracy of pFedCK under different numbers of local iterations

    Dataset                       Local iterations
                                  5        10       15       20
    MNIST-EMNIST      IID         0.7503   0.7833   0.7638   0.7753
                      Non-IID1    0.7379   0.7310   0.7216   0.7422
                      Non-IID2    0.7069   0.7013   0.7140   0.7156
    CIFAR10-CIFAR100  IID         0.6559   0.6555   0.6872   0.6365
                      Non-IID1    0.4604   0.4817   0.4697   0.5064
                      Non-IID2    0.4558   0.4687   0.4513   0.4717

    Table 3  Personalization accuracy of pFedCK under different numbers of local distillation epochs

    Dataset                       Local distillation epochs
                                  1        2        3        5
    MNIST-EMNIST      IID         0.7595   0.7679   0.7772   0.7861
                      Non-IID1    0.7174   0.7116   0.7422   0.7248
                      Non-IID2    0.7025   0.7083   0.7156   0.7133
    CIFAR10-CIFAR100  IID         0.6530   0.6458   0.6869   0.6836
                      Non-IID1    0.4942   0.5078   0.4837   0.5024
                      Non-IID2    0.4620   0.4732   0.4717   0.4597

    Table 4  Personalization accuracy of pFedCK under different $p, q$ settings

    Dataset                       $p, q$
                                  0.1, 0.9   0.2, 0.8   0.3, 0.7   0.4, 0.6
    MNIST-EMNIST      IID         0.7885     0.7842     0.7770     0.7631
                      Non-IID1    0.7214     0.7407     0.7378     0.7435
                      Non-IID2    0.7038     0.6997     0.7156     0.7104
    CIFAR10-CIFAR100  IID         0.6737     0.6450     0.6869     0.6610
                      Non-IID1    0.4915     0.4720     0.5064     0.4749
                      Non-IID2    0.4663     0.4532     0.4717     0.4750

    Table 5  Personalization accuracy of pFedCK under different values of λ

    Dataset                       λ
                                  0.1      0.3      0.5      0.8
    MNIST-EMNIST      IID         0.7843   0.7732   0.7819   0.7791
                      Non-IID1    0.7404   0.7457   0.7422   0.7387
                      Non-IID2    0.7167   0.7203   0.7156   0.7052
    CIFAR10-CIFAR100  IID         0.6106   0.6263   0.6869   0.6843
                      Non-IID1    0.4861   0.4982   0.5064   0.5105
                      Non-IID2    0.4676   0.4532   0.4717   0.4747
Publication history
  • Received: 2022-09-20
  • Revised: 2022-12-27
  • Published online: 2022-12-28
