For Electric Power Disaster Early Warning Scenarios: A Large Model and Lightweight Models Joint Deployment Scheme Based on Limited Spectrum Resources

CHEN Lei, HUANG Zaichao, LIU Chuan, ZHANG Weiwei

Citation: CHEN Lei, HUANG Zaichao, LIU Chuan, ZHANG Weiwei. For Electric Power Disaster Early Warning Scenarios: A Large Model and Lightweight Models Joint Deployment Scheme Based on Limited Spectrum Resources[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT250321


doi: 10.11999/JEIT250321 cstr: 32379.14.JEIT250321
Funds: Science and Technology Project of the State Grid Corporation of China (5700-202441329A-2-1-ZX)
Article information
    About the authors:

    CHEN Lei: male, senior engineer; research interests include power system communications and intelligent network control

    HUANG Zaichao: male, senior engineer; research interests include power system communications and intelligent network control

    LIU Chuan: male, professor-level senior engineer; research interests include power system communications, intelligent network control, and energy-information coordination

    ZHANG Weiwei: female, senior engineer; research interests include information and communication technology, and intelligent operation and inspection

    Corresponding author:

    LIU Chuan, liuchuan1@epri.sgcc.com.cn

  • 11) Since power loss increases exponentially with distance, high power levels are assigned preferentially to terminals close to the base station when power levels are allocated, while low power levels are assigned to terminals farther from the base station (a short illustrative sketch follows this list).
  • CLC number: TP181; TN929.5
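The power-level rule in the note above can be illustrated with a short sketch. This is only an illustration of the stated rule under assumed inputs (the distances and candidate power levels are hypothetical), not the paper's allocation algorithm.

```python
# Illustrative sketch of the power-level rule described above (not the paper's
# exact algorithm): terminals closer to the base station receive higher power
# levels, more distant terminals receive lower ones. Distances and levels are
# hypothetical example values.

def assign_power_levels(distances_m, power_levels_dbm):
    """Map each terminal index to a power level, nearest terminal first."""
    # Sort terminal indices by distance to the base station (ascending).
    order = sorted(range(len(distances_m)), key=lambda i: distances_m[i])
    # Power levels sorted from highest to lowest.
    levels = sorted(power_levels_dbm, reverse=True)
    assignment = {}
    for rank, idx in enumerate(order):
        # Reuse the lowest level if there are more terminals than levels.
        assignment[idx] = levels[min(rank, len(levels) - 1)]
    return assignment

if __name__ == "__main__":
    distances = [120.0, 45.0, 300.0, 80.0]   # metres from the base station
    levels = [23, 17, 11, 5]                 # candidate power levels in dBm
    print(assign_power_levels(distances, levels))
    # nearest terminal (index 1) gets 23 dBm, farthest (index 2) gets 5 dBm
```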

Abstract: In electric power disaster early-warning scenarios, the traditional approach of building a dedicated, independent warning system for each scenario suffers from redundant data collection and high development costs. To improve warning accuracy while reducing cost, an integrated early-warning system built on a large AI model is one of the main directions for future research. Large models, however, usually have to be deployed on the cloud side, and limited wireless spectrum resources make it challenging to upload all data to the cloud. Heavily compressing the model to obtain a lightweight model deployed on the device side bypasses the spectrum bottleneck, but inevitably degrades model performance. To address this, this paper proposes a joint deployment scheme for a large model and lightweight models based on cloud-device collaboration: a high-accuracy large model deployed on the cloud side handles complex tasks, lightweight models deployed on the device side handle simple tasks, and tasks are split between the two through a trust threshold. On this basis, power-domain non-orthogonal multiple access (NOMA) is introduced so that multiple terminals can share the same time-frequency resource, which raises system detection accuracy by increasing the proportion of tasks processed on the cloud side. For the scenario constrained only by the uplink shared-channel bandwidth and the scenario constrained by both terminal access collisions and the shared-channel bandwidth, an algorithm is then designed to find the maximum number of terminals the system can support under a given bandwidth, and another to find the trust threshold that maximizes detection accuracy. Numerical results show that the proposed scheme significantly outperforms several baseline schemes in the number of supportable terminals and in detection accuracy, confirming its effectiveness and superiority.
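The cloud-device split described in the abstract hinges on a trust threshold: when the device-side lightweight model is sufficiently confident, its result is accepted locally; otherwise the sample is uploaded to the cloud-side large model. The sketch below only illustrates this routing idea; `local_model`, `cloud_model`, and the threshold value are hypothetical placeholders rather than the authors' implementation.

```python
# Schematic of trust-threshold task routing between a device-side lightweight
# model and a cloud-side large model. `local_model`, `cloud_model`, and the
# threshold value are hypothetical placeholders, not the paper's code.

def route_sample(sample, local_model, cloud_model, trust_threshold):
    """Return (label, handled_locally) for one data sample."""
    label, confidence = local_model(sample)      # lightweight model inference
    if confidence >= trust_threshold:
        # Confident enough: accept the device-side result, no uplink needed.
        return label, True
    # Otherwise spend uplink bandwidth and let the large model decide.
    return cloud_model(sample), False

if __name__ == "__main__":
    # Toy stand-ins: the local model is unsure about "noisy" samples.
    local = lambda x: ("normal", 0.55 if x == "noisy" else 0.97)
    cloud = lambda x: "abnormal" if x == "noisy" else "normal"
    for s in ["clean", "noisy"]:
        print(s, route_sample(s, local, cloud, trust_threshold=0.9))
```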
Figure 1  Schematic of the cloud-device collaborative joint deployment of the large model and lightweight models

    Figure 2  Effect of the trust threshold on detection accuracy and missed-detection rate

    Figure 3  Effect of the uplink shared-channel bandwidth on the number of terminals the system can support

    Figure 4  Convergence of the algorithm for finding the trust threshold that maximizes detection accuracy

    Table 1  Mathematical symbols

    Symbol | Meaning | Symbol | Meaning
    $\mathcal{S}$ | set of base stations | $\mathcal{M}$ | set of terminals
    $\mathcal{M}_{s}$ | set of terminals within the coverage of base station $s$ | $\alpha$ | probability of typical data
    $P_{\mathrm{acc},s,m}^{(\tau)}$ | local detection accuracy of terminal $m$ | $P_{\mathrm{acc},c}^{(\tau)}$ | detection accuracy of the cloud server
    $W$ | total uplink shared-channel bandwidth | $\beta$ | probability of abnormal data
    $W_{\mathrm{sub}}$ | sub-channel bandwidth | $\mu_{s,m}^{(\tau)}$ | indicator of local lightweight-model detection
    $\varGamma_{\mathrm{P}}$ | trust threshold | $\nu_{s,m}^{(\tau)}$ | indicator of abnormal data
    $\varGamma_{\mathrm{acc}}$ | minimum detection-accuracy threshold | $\varGamma_{\mathrm{miss}}$ | maximum missed-detection-rate threshold
    $\xi_{s,m}^{(\tau)}$ | indicator of terminal access collision | $K$ | number of preambles

    Table 2  Simulation parameter settings

    Parameter | Value | Parameter | Value
    Number of base stations $S$ | 3 | Probability of typical data $\alpha$ | {0.9, 0.8}
    Sub-channel bandwidth $W_{\mathrm{sub}}$ | 2 MHz | Probability of abnormal data $\beta$ | 0.1
    $\mathrm{LB}_{1}^{\mathrm{dev}}, \mathrm{UB}_{1}^{\mathrm{dev}}, \mathrm{LB}_{2}^{\mathrm{dev}}, \mathrm{UB}_{2}^{\mathrm{dev}}$ | 0.96, 0.98, 0.90, 0.96 | Accuracy $\varepsilon_{\mathrm{M}}$ | 1
    $\mathrm{LB}_{1}^{\mathrm{clo}}, \mathrm{UB}_{1}^{\mathrm{clo}}, \mathrm{LB}_{2}^{\mathrm{clo}}, \mathrm{UB}_{2}^{\mathrm{clo}}$ | 0.995, 1.000, 0.990, 0.995 | Number of particles $N$ | 20
    Inertia weight $\omega$ | 0.5 | Learning factors $c_{1}, c_{2}$ | 2, 2
    Number of simulation periods $\tau_{\max}$ | 1000 | Maximum number of iterations $I$ | 200
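Table 2 lists standard particle swarm optimization (PSO) parameters (inertia weight, learning factors, particle count, maximum iterations), which suggests the trust-threshold search is PSO-based. The snippet below is a generic one-dimensional PSO sketch that reuses those published parameter values; the objective function is a placeholder, not the paper's detection-accuracy objective.

```python
import random

# Generic 1-D particle swarm optimization over the trust threshold, using the
# parameter values from Table 2 (omega = 0.5, c1 = c2 = 2, N = 20 particles,
# 200 iterations). `fitness` is a hypothetical placeholder objective; the
# paper's actual objective (detection accuracy under a missed-detection-rate
# constraint) is not reproduced here.

def pso_trust_threshold(fitness, lo=0.0, hi=1.0,
                        n_particles=20, iters=200, omega=0.5, c1=2.0, c2=2.0):
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                              # personal best positions
    pbest_val = [fitness(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]   # global best

    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (omega * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))  # keep threshold in [lo, hi]
            val = fitness(pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i], val
    return gbest, gbest_val

if __name__ == "__main__":
    # Placeholder objective peaking at a threshold of 0.8, for illustration only.
    best, val = pso_trust_threshold(lambda t: -(t - 0.8) ** 2)
    print(round(best, 3), round(val, 5))
```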

    Table 3  Effect of the number of preambles on system performance

    Columns 2–5 give the fitness value for $\varGamma_{\mathrm{miss}}=0.01$; columns 6–9 give the fitness value for $\varGamma_{\mathrm{miss}}=0.02$. "Proposed" is the proposed scheme, "OMA-based" the OMA-based scheme, "Device-only" the scheme that processes everything on the device side, and "Random threshold" the scheme with a randomly chosen trust threshold.
    Preambles | Proposed | OMA-based | Device-only | Random threshold | Proposed | OMA-based | Device-only | Random threshold
    10 | 0.0028 | 0.0028 | 0.0206 | 0.0183 | 0.9657 | 0.9657 | 0.0106 | 0.0691
    20 | 0.0016 | 0.0016 | 0.0206 | 0.0171 | 0.9679 | 0.9679 | 0.0106 | 0.1190
    30 | 0.0001 | 0.0026 | 0.0206 | 0.0162 | 0.9674 | 0.9667 | 0.0106 | 0.1005
    40 | 0.9666 | 0.0004 | 0.0206 | 0.0155 | 0.9725 | 0.9667 | 0.0106 | 0.1597
    50 | 0.9701 | 0.9653 | 0.0206 | 0.0149 | 0.9796 | 0.9677 | 0.0106 | 0.1992
    60 | 0.9767 | 0.9617 | 0.0206 | 0.0144 | 0.9816 | 0.9687 | 0.0106 | 0.1707
    70 | 0.9793 | 0.9663 | 0.0206 | 0.0142 | 0.9811 | 0.9677 | 0.0106 | 0.2293
    80 | 0.9794 | 0.9611 | 0.0206 | 0.0138 | 0.9826 | 0.9693 | 0.0106 | 0.2979
    90 | 0.9826 | 0.9659 | 0.0206 | 0.0137 | 0.9823 | 0.9680 | 0.0106 | 0.2884
    100 | 0.9817 | 0.9626 | 0.0206 | 0.0132 | 0.9820 | 0.9689 | 0.0106 | 0.2791

    Table 4  Effect of the terminal-to-preamble ratio on the terminal access-collision probability

    Number of preambles | Number of terminals | Ratio (terminals : preambles) | Collision probability
    20 | 10 | 0.5:1 | 0.3698
    20 | 20 | 1:1 | 0.6226
    20 | 40 | 2:1 | 0.8647
    20 | 60 | 3:1 | 0.9515
    20 | 80 | 4:1 | 0.9826
    20 | 100 | 5:1 | 0.9938
    20 | 120 | 6:1 | 0.9978
    20 | 140 | 7:1 | 0.9992
    20 | 160 | 8:1 | 0.9997
    20 | 180 | 9:1 | 0.9999
    20 | 200 | 10:1 | ≈1
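The collision probabilities in Table 4 match the standard contention model in which each of $M$ terminals independently and uniformly picks one of $K$ preambles, so a tagged terminal collides with probability $1-(1-1/K)^{M-1}$. The check below reproduces the table values; the closed-form expression is an observation consistent with the data, not quoted from the paper.

```python
# Collision probability when M terminals each pick one of K preambles uniformly
# at random: a tagged terminal collides if any of the other M-1 terminals picks
# the same preamble. The printed values reproduce Table 4 (K = 20).

def collision_probability(num_terminals, num_preambles):
    return 1.0 - (1.0 - 1.0 / num_preambles) ** (num_terminals - 1)

if __name__ == "__main__":
    K = 20
    for M in (10, 20, 40, 60, 80, 100, 120, 140, 160, 180, 200):
        print(f"M={M:3d}  ratio={M / K:.1f}:1  "
              f"P_collision={collision_probability(M, K):.4f}")
    # e.g. M=10 -> 0.3698, M=20 -> 0.6226, M=200 -> 1.0000 (≈1), matching Table 4
```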
Publication history
  • Received: 2025-04-27
  • Revised: 2025-10-22
  • Published online: 2025-10-27
