Citation: MA Zhenguo, HE Zixuan, SUN Yanjing, WANG Bowen, LIU Jianchun, XU Hongli. Research on Federated Unlearning Approach Based on Adaptive Model Pruning[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT250503

Research on Federated Unlearning Approach Based on Adaptive Model Pruning

doi: 10.11999/JEIT250503 cstr: 32379.14.JEIT250503
Funds:  Fundamental Research Funds for the Central Universities (XJ2025012301), Shenzhen Science and Technology Program (KCXFZ20240903094204007)
  • Received Date: 2025-06-03
  • Rev Recd Date: 2025-08-05
  • Available Online: 2025-09-16
Objective  The rapid proliferation of Internet of Things (IoT) devices and the enforcement of data privacy regulations, including the General Data Protection Regulation (GDPR) and the Personal Information Protection Act, have positioned Federated Unlearning (FU) as a critical mechanism for safeguarding the “right to be forgotten” in Edge Computing (EC). Existing class-level unlearning approaches often adopt uniform model pruning strategies. However, because edge nodes vary substantially in computational capacity, storage, and network bandwidth, uniform pruning leads to imbalanced training delays and decreased resource utilization. This study proposes FU with Adaptive Model Pruning (FunAMP), a framework that minimizes training time while reliably eliminating the influence of target-class data. FunAMP dynamically assigns pruning ratios according to node resources and incorporates a parameter correlation metric to guide pruning decisions, thereby addressing resource heterogeneity while preserving compliance with privacy regulations.

Methods  The proposed framework establishes a quantitative relationship among model training time, node resources, and pruning ratios, on the basis of which an optimization problem is formulated to minimize overall training time. To solve this problem, a greedy algorithm (Algorithm 2) adaptively assigns an appropriate pruning ratio to each node: it discretizes the pruning-ratio space and applies a binary search to balance computation and communication delays across nodes. Additionally, a Term Frequency–Inverse Document Frequency (TF–IDF)-based metric evaluates the correlation between model parameters and the target-class data. For each parameter, the TF score reflects its activation contribution to the target class, whereas the IDF score measures its specificity across all classes. Parameters with the highest TF–IDF scores are pruned iteratively until the assigned pruning ratio is satisfied, ensuring effective removal of the target-class data. (Illustrative sketches of the optimization, the ratio assignment, and the TF–IDF scoring follow the Results and Discussions paragraph.)

Results and Discussions  Simulation results confirm that FunAMP balances training efficiency and unlearning performance under resource heterogeneity. Pruning granularity affects model accuracy (Fig. 2): a fine granularity (e.g., 0.01) preserves model integrity, whereas coarse settings degrade accuracy through excessive parameter removal. Under a fixed training time, FunAMP consistently achieves higher accuracy than FunUMP and Retrain (Fig. 3) because its adaptive pruning ratios reduce inter-node waiting delays; for instance, FunAMP attains 76.48% accuracy with LeNet and 83.60% with AlexNet on FMNIST, outperforming the baseline methods by 5.91% and 4.44%, respectively. The TF–IDF-driven pruning mechanism fully removes the contribution of target-class data, achieving 0.00% accuracy on the target class while maintaining competitive performance on the remaining data (Table 2). Robustness under varying heterogeneity levels is further verified (Fig. 4): compared with the baselines, FunAMP markedly reduces the training time required to reach a predefined accuracy, delivering up to 11.8× speedup across four models. These results demonstrate FunAMP’s capability to harmonize resource utilization, preserve model performance, and ensure unlearning efficacy in heterogeneous edge environments.
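This page does not reproduce the paper’s system model, but the optimization described in Methods can be sketched as follows, under our assumption (not necessarily the paper’s) that a node’s per-round computation delay $t_i^{\mathrm{cmp}}$ and communication delay $t_i^{\mathrm{com}}$ shrink roughly in proportion to the fraction of parameters kept at pruning ratio $\rho_i$, and that rounds are synchronous, so the round time is set by the slowest node:

$$\min_{\{\rho_i\}} \ \max_i \ (1-\rho_i)\left(t_i^{\mathrm{cmp}} + t_i^{\mathrm{com}}\right) \quad \text{s.t.} \quad 0 \le \rho_i \le \rho_{\max} \ \ \forall i,$$

where $\rho_{\max}$ caps pruning so that accuracy on the remaining classes is not destroyed by excessive parameter removal.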
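A minimal Python sketch of a greedy assignment in the spirit of Algorithm 2: it discretizes the pruning-ratio space and binary-searches a common round deadline, then gives each node the smallest ratio that meets the deadline. The linear delay model and all names (node_time, assign_pruning_ratios) are illustrative assumptions, not the paper’s implementation.

```python
# Hedged sketch: the linear delay model below is an assumption.

def node_time(comp_full, comm_full, p):
    """Per-round delay of one node at pruning ratio p, assuming both
    computation and communication scale with the kept fraction 1 - p."""
    return (comp_full + comm_full) * (1.0 - p)

def assign_pruning_ratios(comp, comm, step=0.01, p_max=0.5, iters=50):
    """Binary-search a common round deadline T and give each node the
    smallest discretized pruning ratio meeting T, so fast nodes keep
    more of the model while slow nodes prune more."""
    n_steps = int(round(p_max / step))
    ratios = [k * step for k in range(n_steps + 1)]  # 0, step, ..., p_max
    lo = min(node_time(c, m, p_max) for c, m in zip(comp, comm))
    hi = max(node_time(c, m, 0.0) for c, m in zip(comp, comm))
    best = [0.0] * len(comp)  # T = hi is always feasible with no pruning
    for _ in range(iters):
        T = (lo + hi) / 2.0
        assignment = []
        for c, m in zip(comp, comm):
            # ratios ascend and node_time falls as p grows, so this
            # picks the smallest ratio that meets the deadline T.
            p = next((r for r in ratios if node_time(c, m, r) <= T), None)
            if p is None:
                break  # node misses T even at p_max: T is infeasible
            assignment.append(p)
        if len(assignment) == len(comp):
            best, hi = assignment, T  # feasible: try a tighter deadline
        else:
            lo = T
    return best

# Three heterogeneous nodes (per-round seconds at the full model size):
print(assign_pruning_ratios(comp=[2.0, 5.0, 9.0], comm=[1.0, 2.0, 4.0]))
# -> approximately [0.0, 0.08, 0.5]: only slower nodes prune heavily
```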
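In the same hedged spirit, a sketch of TF–IDF scoring over per-class channel activations, treating channels as terms and classes as documents; the occurrence threshold and the exact TF/IDF formulas here are plausible choices rather than the paper’s metric. The pruning ratio fed to prune_mask would come from the assignment step above.

```python
import numpy as np

def tfidf_scores(act, target):
    """Score each channel's correlation with the target class.

    act[c, j] is the mean activation of channel j on class-c samples
    (e.g., gathered with forward hooks on held-out data). TF is channel
    j's share of the target class's activation mass; IDF down-weights
    channels that fire for many classes.
    """
    num_classes, _ = act.shape
    tf = act[target] / (act[target].sum() + 1e-12)
    # Assumed occurrence rule: a channel "occurs" in a class when it
    # activates above that class's mean channel activation.
    occurs = act > act.mean(axis=1, keepdims=True)
    idf = np.log(num_classes / (1.0 + occurs.sum(axis=0)))
    return tf * idf

def prune_mask(scores, ratio):
    """Zero out the `ratio` fraction of channels most correlated with
    the target class (highest TF-IDF scores pruned first)."""
    k = int(round(ratio * scores.size))
    mask = np.ones_like(scores)
    if k > 0:
        mask[np.argsort(scores)[-k:]] = 0.0
    return mask

rng = np.random.default_rng(0)
act = rng.random((10, 64))  # toy activations: 10 classes, 64 channels
mask = prune_mask(tfidf_scores(act, target=3), ratio=0.25)
print(int(mask.sum()), "of", mask.size, "channels kept")  # 48 of 64
```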
Conclusions  To mitigate training inefficiency caused by resource heterogeneity in FU, this study proposes FunAMP, a framework that integrates adaptive pruning with parameter relevance analysis. A system model is constructed to formalize the relationship among node resources, pruning ratios, and training time. A greedy algorithm dynamically assigns pruning ratios to edge nodes, thereby minimizing global training time while balancing computational and communication delays. Furthermore, a TF–IDF-driven metric quantifies the correlation between model parameters and target-class data, enabling the selective removal of critical parameters to erase target-class contributions. Theoretical analysis verifies the stability and reliability of the framework, while empirical results demonstrate that FunAMP achieves complete removal of target-class data and sustains competitive accuracy on the remaining classes. This work is limited to single-class unlearning, and extending the approach to scenarios requiring the simultaneous removal of multiple classes remains an important direction for future research.