Multi-scale Frequency Adapter and Dual-path Attention for Time Series Forecasting

YANG Zhenzhen, XU Yi, WAN Chengye, YANG Yongpeng

Citation: YANG Zhenzhen, XU Yi, WAN Chengye, YANG Yongpeng. Multi-scale Frequency Adapter and Dual-path Attention for Time Series Forecasting[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT251188


doi: 10.11999/JEIT251188 cstr: 32379.14.JEIT251188
Funds: The National Natural Science Foundation of China (No. 62571269), and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (Nos. KYCX24_1125, SJCX24_0279)
Article Information
    About the authors:

    YANG Zhenzhen: Female, Ph.D., Associate Professor at Nanjing University of Posts and Telecommunications. Her research interests include deep learning and its applications

    XU Yi: Female, graduate student at the College of Science, Nanjing University of Posts and Telecommunications. Her research interests include deep learning and time series forecasting

    WAN Chengye: Male, graduate student at the College of Science, Nanjing University of Posts and Telecommunications. His research interests include deep learning and time series forecasting

    YANG Yongpeng: Male, Ph.D., Lecturer at Nanjing Vocational College of Information Technology. His research interests include deep learning and its applications

    Corresponding author:

    YANG Zhenzhen, yangzz@njupt.edu.cn

  • CLC number: TP391

  • Abstract: In multi-scale modeling and frequency-domain feature extraction, existing mainstream time series forecasting methods struggle to jointly handle the complex periodic patterns and local dynamic variations in the data, so key temporal characteristics cannot be fully captured. To address this problem, a forecasting method based on a Multi-scale Frequency Adapter and Dual-path Attention (MFADA) is proposed. The method uses a Multi-scale Frequency Adapter (MFA) to adaptively extract the key frequency components of the series and obtain a global periodicity prior. In addition, a Multi-scale Dual-path Attention (MDA) mechanism embeds the frequency-domain prior into both the temporal path and the feature path, enabling dynamic cross-granularity joint modeling and strengthening the characterization of the complex evolution of time series. Experimental results show that the proposed MFADA clearly outperforms existing mainstream forecasting methods on eight public time series datasets, achieving strong prediction accuracy and computational efficiency, which verifies the effectiveness and superiority of the proposed "frequency-domain guidance, time-domain collaboration" framework and offers a new approach to complex time series tasks.
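    The abstract describes the two components only at a conceptual level. The sketch below is an illustrative PyTorch reconstruction of the general ideas, not the authors' implementation: an FFT-based adapter that keeps the strongest frequency components of each series as a global periodic prior, and a dual-path block that applies attention along the temporal axis and along the variable (feature) axis before fusing the two. All module names, tensor shapes, the top-k selection rule, and the additive injection of the prior are assumptions made for illustration.

    # Illustrative sketch only; module names, shapes, top-k rule and fusion are assumptions,
    # not the MFADA code described in the paper.
    import torch
    import torch.nn as nn

    class FrequencyAdapter(nn.Module):
        """Keep the k strongest frequency components per series as a global periodic prior."""
        def __init__(self, top_k: int = 8):
            super().__init__()
            self.top_k = top_k

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: [batch, length, channels]
            spec = torch.fft.rfft(x, dim=1)                      # complex spectrum over time
            amp = spec.abs()
            idx = amp.topk(self.top_k, dim=1).indices            # strongest bins per channel
            mask = torch.zeros_like(amp).scatter_(1, idx, 1.0)   # zero out all other bins
            prior = torch.fft.irfft(spec * mask, n=x.size(1), dim=1)
            return prior                                         # same shape as x

    class DualPathAttention(nn.Module):
        """Attention over time steps and over variables, guided by the frequency prior."""
        def __init__(self, n_channels: int, d_model: int = 64, n_heads: int = 4):
            super().__init__()
            self.embed_t = nn.Linear(n_channels, d_model)        # temporal path: tokens = time steps
            self.embed_c = nn.LazyLinear(d_model)                # feature path: tokens = variables
            self.attn_t = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.attn_c = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.out = nn.Linear(2 * d_model, n_channels)

        def forward(self, x: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
            z = x + prior                                        # inject the prior (an assumption)
            t_tok = self.embed_t(z)                              # [B, L, d]
            t_out, _ = self.attn_t(t_tok, t_tok, t_tok)
            c_tok = self.embed_c(z.transpose(1, 2))              # [B, C, d]
            c_out, _ = self.attn_c(c_tok, c_tok, c_tok)
            # broadcast the per-variable summary to every time step and fuse the two paths
            fused = torch.cat([t_out, c_out.mean(dim=1, keepdim=True).expand_as(t_out)], dim=-1)
            return self.out(fused)                               # [B, L, C]

    if __name__ == "__main__":
        x = torch.randn(2, 96, 7)                                # 2 series, 96 steps, 7 variables
        prior = FrequencyAdapter(top_k=8)(x)
        y = DualPathAttention(n_channels=7)(x, prior)
        print(y.shape)                                           # torch.Size([2, 96, 7])

    A full model would add multi-scale decomposition, normalization and a forecasting head on top of such blocks; the sketch only shows how a frequency-domain prior can be fed into two attention paths.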
  • Figure 1  Overall framework

    Figure 2  Complexity experiments

    Figure 3  Visualization of forecasting results on the ECL dataset

    Figure 4  Visualization of forecasting results on the ETTh2 dataset
    Table 1  Comparative experiment results

    Dataset | Horizon | MFADA | Fredformer | Peri-midFormer | iTransformer | TFformer | PatchTST | MSGNet | TimesNet | TCM | DLinear
    (each cell: MSE / MAE)
    ECL | 96 | 0.139/0.234 | 0.147/0.241 | 0.141/0.235 | 0.148/0.239 | 0.151/0.251 | 0.181/0.270 | 0.165/0.274 | 0.168/0.272 | 0.153/0.253 | 0.197/0.282
    ECL | 192 | 0.152/0.248 | 0.163/0.257 | 0.157/0.249 | 0.167/0.258 | 0.165/0.264 | 0.188/0.274 | 0.184/0.292 | 0.184/0.289 | 0.171/0.269 | 0.196/0.285
    ECL | 336 | 0.166/0.266 | 0.180/0.276 | 0.174/0.267 | 0.179/0.272 | 0.180/0.278 | 0.204/0.293 | 0.195/0.302 | 0.198/0.300 | 0.183/0.283 | 0.209/0.301
    ECL | 720 | 0.196/0.292 | 0.213/0.302 | 0.205/0.296 | 0.211/0.300 | 0.213/0.302 | 0.246/0.324 | 0.231/0.332 | 0.220/0.320 | 0.217/0.311 | 0.245/0.333
    ECL | Avg | 0.163/0.261 | 0.176/0.269 | 0.169/0.262 | 0.176/0.267 | 0.177/0.274 | 0.205/0.290 | 0.194/0.300 | 0.192/0.295 | 0.181/0.279 | 0.212/0.300
    Weather | 96 | 0.153/0.201 | 0.160/0.205 | 0.172/0.218 | 0.176/0.216 | 0.172/0.220 | 0.177/0.218 | 0.163/0.212 | 0.172/0.220 | 0.153/0.202 | 0.196/0.255
    Weather | 192 | 0.205/0.248 | 0.208/0.249 | 0.217/0.256 | 0.225/0.257 | 0.219/0.259 | 0.225/0.259 | 0.212/0.254 | 0.219/0.261 | 0.203/0.249 | 0.237/0.296
    Weather | 336 | 0.263/0.290 | 0.265/0.291 | 0.276/0.298 | 0.281/0.299 | 0.275/0.298 | 0.278/0.297 | 0.272/0.299 | 0.280/0.306 | 0.263/0.294 | 0.283/0.335
    Weather | 720 | 0.340/0.340 | 0.343/0.341 | 0.356/0.349 | 0.358/0.350 | 0.350/0.347 | 0.354/0.348 | 0.350/0.348 | 0.365/0.359 | 0.344/0.345 | 0.345/0.381
    Weather | Avg | 0.240/0.270 | 0.244/0.272 | 0.255/0.280 | 0.260/0.280 | 0.254/0.281 | 0.259/0.281 | 0.246/0.278 | 0.259/0.287 | 0.241/0.273 | 0.265/0.317
    ETTm1 | 96 | 0.321/0.362 | 0.328/0.363 | 0.331/0.368 | 0.342/0.377 | 0.334/0.370 | 0.329/0.367 | 0.319/0.366 | 0.338/0.375 | 0.311/0.352 | 0.345/0.372
    ETTm1 | 192 | 0.354/0.378 | 0.367/0.382 | 0.372/0.390 | 0.383/0.396 | 0.373/0.390 | 0.367/0.385 | 0.376/0.397 | 0.374/0.387 | 0.368/0.384 | 0.380/0.389
    ETTm1 | 336 | 0.384/0.400 | 0.395/0.403 | 0.411/0.420 | 0.418/0.418 | 0.405/0.417 | 0.399/0.410 | 0.417/0.422 | 0.410/0.411 | 0.395/0.402 | 0.413/0.413
    ETTm1 | 720 | 0.449/0.438 | 0.454/0.440 | 0.472/0.453 | 0.487/0.457 | 0.471/0.453 | 0.454/0.439 | 0.481/0.458 | 0.478/0.450 | 0.462/0.440 | 0.474/0.453
    ETTm1 | Avg | 0.377/0.395 | 0.386/0.397 | 0.397/0.408 | 0.408/0.412 | 0.396/0.408 | 0.387/0.400 | 0.398/0.411 | 0.400/0.406 | 0.384/0.395 | 0.403/0.407
    ETTm2 | 96 | 0.177/0.260 | 0.178/0.261 | 0.178/0.260 | 0.186/0.272 | 0.176/0.261 | 0.175/0.259 | 0.247/0.307 | 0.187/0.267 | 0.173/0.258 | 0.193/0.292
    ETTm2 | 192 | 0.241/0.300 | 0.244/0.303 | 0.248/0.306 | 0.254/0.314 | 0.245/0.305 | 0.241/0.302 | 0.312/0.346 | 0.249/0.309 | 0.246/0.306 | 0.284/0.362
    ETTm2 | 336 | 0.299/0.339 | 0.302/0.341 | 0.308/0.342 | 0.316/0.351 | 0.304/0.342 | 0.305/0.343 | 0.314/0.348 | 0.321/0.351 | 0.302/0.341 | 0.369/0.427
    ETTm2 | 720 | 0.394/0.395 | 0.397/0.396 | 0.419/0.404 | 0.414/0.407 | 0.400/0.398 | 1.730/1.042 | 0.414/0.403 | 0.408/0.522 | 0.406/0.400 | 0.421/0.415
    ETTm2 | Avg | 0.278/0.323 | 0.280/0.325 | 0.288/0.328 | 0.292/0.336 | 0.281/0.327 | 0.613/0.487 | 0.322/0.351 | 0.358/0.404 | 0.282/0.326 | 0.350/0.401
    ETTh1 | 96 | 0.367/0.392 | 0.376/0.394 | 0.380/0.400 | 0.387/0.405 | 0.370/0.394 | 0.414/0.419 | 0.389/0.411 | 0.384/0.402 | 0.374/0.395 | 0.376/0.400
    ETTh1 | 192 | 0.431/0.424 | 0.439/0.425 | 0.433/0.432 | 0.441/0.436 | 0.432/0.425 | 0.460/0.445 | 0.442/0.418 | 0.436/0.429 | 0.436/0.421 | 0.420/0.432
    ETTh1 | 336 | 0.472/0.439 | 0.473/0.440 | 0.480/0.453 | 0.491/0.462 | 0.475/0.443 | 0.501/0.466 | 0.480/0.468 | 0.491/0.469 | 0.475/0.442 | 0.481/0.459
    ETTh1 | 720 | 0.479/0.461 | 0.490/0.466 | 0.547/0.511 | 0.509/0.494 | 0.481/0.463 | 0.500/0.488 | 0.494/0.488 | 0.521/0.500 | 0.476/0.463 | 0.478/0.453
    ETTh1 | Avg | 0.437/0.429 | 0.445/0.432 | 0.460/0.449 | 0.457/0.449 | 0.440/0.431 | 0.469/0.454 | 0.451/0.446 | 0.458/0.450 | 0.440/0.430 | 0.433/0.447
    ETTh2 | 96 | 0.290/0.341 | 0.293/0.343 | 0.296/0.342 | 0.301/0.350 | 0.294/0.344 | 0.302/0.348 | 0.328/0.371 | 0.340/0.374 | 0.294/0.346 | 0.333/0.387
    ETTh2 | 192 | 0.364/0.388 | 0.370/0.390 | 0.392/0.406 | 0.380/0.399 | 0.375/0.391 | 0.388/0.400 | 0.402/0.414 | 0.402/0.414 | 0.383/0.399 | 0.477/0.476
    ETTh2 | 336 | 0.379/0.406 | 0.385/0.413 | 0.428/0.434 | 0.424/0.432 | 0.388/0.415 | 0.426/0.433 | 0.435/0.443 | 0.412/0.424 | 0.413/0.424 | 0.594/0.541
    ETTh2 | 720 | 0.407/0.431 | 0.419/0.439 | 0.479/0.470 | 0.430/0.447 | 0.423/0.440 | 0.431/0.446 | 0.417/0.441 | 0.462/0.468 | 0.427/0.440 | 0.831/0.657
    ETTh2 | Avg | 0.360/0.391 | 0.367/0.396 | 0.399/0.413 | 0.384/0.407 | 0.370/0.398 | 0.387/0.407 | 0.396/0.417 | 0.414/0.427 | 0.379/0.402 | 0.559/0.515
    Solar-Energy | 96 | 0.191/0.225 | 0.195/0.251 | 0.198/0.251 | 0.208/0.238 | 0.197/0.252 | 0.234/0.286 | 0.228/0.263 | 0.250/0.292 | 0.312/0.399 | 0.290/0.378
    Solar-Energy | 192 | 0.221/0.251 | 0.227/0.259 | 0.237/0.259 | 0.240/0.264 | 0.228/0.260 | 0.267/0.310 | 0.248/0.275 | 0.296/0.318 | 0.339/0.416 | 0.320/0.398
    Solar-Energy | 336 | 0.252/0.287 | 0.247/0.275 | 0.248/0.276 | 0.249/0.274 | 0.253/0.287 | 0.290/0.315 | 0.291/0.301 | 0.319/0.330 | 0.368/0.430 | 0.353/0.415
    Solar-Energy | 720 | 0.245/0.281 | 0.253/0.283 | 0.261/0.283 | 0.250/0.275 | 0.252/0.283 | 0.289/0.317 | 0.291/0.306 | 0.338/0.337 | 0.370/0.425 | 0.356/0.413
    Solar-Energy | Avg | 0.228/0.261 | 0.230/0.267 | 0.236/0.267 | 0.237/0.263 | 0.233/0.271 | 0.270/0.307 | 0.265/0.286 | 0.301/0.319 | 0.347/0.417 | 0.330/0.400
    Traffic | 96 | 0.421/0.290 | 0.404/0.274 | 0.435/0.294 | 0.392/0.268 | 0.525/0.342 | 0.462/0.295 | 0.594/0.336 | 0.593/0.321 | 0.508/0.342 | 0.650/0.396
    Traffic | 192 | 0.443/0.292 | 0.427/0.288 | 0.451/0.299 | 0.413/0.277 | 0.514/0.346 | 0.466/0.296 | 0.615/0.347 | 0.617/0.336 | 0.609/0.387 | 0.598/0.370
    Traffic | 336 | 0.464/0.320 | 0.440/0.294 | 0.463/0.302 | 0.425/0.283 | 0.531/0.357 | 0.482/0.304 | 0.623/0.351 | 0.629/0.336 | 0.640/0.402 | 0.605/0.373
    Traffic | 720 | 0.487/0.329 | 0.466/0.307 | 0.496/0.321 | 0.460/0.301 | 0.569/0.373 | 0.514/0.322 | 0.608/0.343 | 0.640/0.350 | 0.715/0.442 | 0.645/0.394
    Traffic | Avg | 0.454/0.308 | 0.434/0.291 | 0.461/0.304 | 0.422/0.282 | 0.535/0.355 | 0.481/0.304 | 0.610/0.344 | 0.620/0.336 | 0.618/0.393 | 0.625/0.383

    Table 2  Ablation experiment results

    Model | ECL | Weather | ETTh1 | ETTh2
    (each cell: MSE / MAE)
    Fredformer | 0.176/0.269 | 0.244/0.272 | 0.445/0.432 | 0.367/0.396
    w/o MFA | 0.172/0.266 | 0.242/0.272 | 0.439/0.431 | 0.361/0.392
    Re MDA | 0.170/0.265 | 0.243/0.271 | 0.440/0.430 | 0.363/0.394
    MFADA | 0.163/0.261 | 0.240/0.270 | 0.437/0.429 | 0.360/0.391
  • [1] KONG Xiangjie, CHEN Zhenghao, LIU Weiyao, et al. Deep learning for time series forecasting: A survey[J]. International Journal of Machine Learning and Cybernetics, 2025, 16(5): 5079–5112. doi: 10.1007/s13042-025-02560-w.
    [2] ZHONG Weiyi, ZHAI Dengshuai, XU Wenran, et al. Accurate and efficient daily carbon emission forecasting based on improved ARIMA[J]. Applied Energy, 2024, 376: 124232. doi: 10.1016/j.apenergy.2024.124232.
    [3] PAN Jinwei, WANG Yiqiao, ZHONG Bo, et al. Statistical feature-based search for multivariate time series forecasting[J]. Journal of Electronics & Information Technology, 2024, 46(8): 3276–3284. doi: 10.11999/JEIT231264.
    [4] DA SILVA D G and DE MOURA MENESES A A M. Comparing long short-term memory (LSTM) and bidirectional LSTM deep neural networks for power consumption prediction[J]. Energy Reports, 2023, 10: 3315–3334. doi: 10.1016/j.egyr.2023.09.175.
    [5] ZHENG Qinghe, LI Binglin, YU Zhiguo, et al. Research progress of deep learning enabled automatic modulation classification technology[J]. Journal of Electronics & Information Technology, 2025, 47(11): 4096–4111. doi: 10.11999/JEIT250674.
    [6] LIU Hui, FENG Haoran, MA Jiani, et al. Spatial self-attention incorporated imputation algorithm for severely missing multivariate time series[J]. Journal of Electronics & Information Technology, 2025, 47(10): 3917–3928. doi: 10.11999/JEIT250220.
    [7] RABBANI M B A, MUSARAT M A, ALALOUL W S, et al. A comparison between seasonal autoregressive integrated moving average (SARIMA) and exponential smoothing (ES) based on time series model for forecasting road accidents[J]. Arabian Journal for Science and Engineering, 2021, 46(11): 11113–11138. doi: 10.1007/s13369-021-05650-3.
    [8] WU Haixu, HU Tengge, LIU Yong, et al. TimesNet: Temporal 2D-variation modeling for general time series analysis[C]. Proceedings of the 11th International Conference on Learning Representations, Kigali, Rwanda, 2023.
    [9] COUTINHO E R, MADEIRA J G F, BORGES D G F, et al. Multi-step forecasting of meteorological time series using CNN-LSTM with decomposition methods[J]. Water Resources Management, 2025, 39(7): 3173–3198. doi: 10.1007/s11269-025-04102-z.
    [10] CAI Wanlin, LIANG Yuxuan, LIU Xianggen, et al. MSGNet: Learning multi-scale inter-series correlations for multivariate time series forecasting[C]. Proceedings of the 38th AAAI Conference on Artificial Intelligence, Vancouver, Canada, 2024: 11141–11149. doi: 10.1609/aaai.v38i10.28991.
    [11] YUNITA A, PRATAMA M H D I, ALMUZAKKI M Z, et al. Performance analysis of neural network architectures for time series forecasting: A comparative study of RNN, LSTM, GRU, and hybrid models[J]. MethodsX, 2025, 15: 103462. doi: 10.1016/j.mex.2024.103462.
    [12] YADAV H and THAKKAR A. NOA-LSTM: An efficient LSTM cell architecture for time series forecasting[J]. Expert Systems with Applications, 2024, 238: 122333. doi: 10.1016/j.eswa.2023.122333.
    [13] UBAL C, DI-GIORGI G, CONTRERAS-REYES J E, et al. Predicting the long-term dependencies in time series using recurrent artificial neural networks[J]. Machine Learning and Knowledge Extraction, 2023, 5(4): 1340–1358. doi: 10.3390/make5040068.
    [14] ZENG Ailing, CHEN Muxi, ZHANG Lei, et al. Are transformers effective for time series forecasting?[C]. Proceedings of the 37th AAAI Conference on Artificial Intelligence, Washington, USA, 2023: 11121–11128. doi: 10.1609/aaai.v37i9.26317.
    [15] JIANG Hongwei, LIU Dongsheng, DING Xinyi, et al. TCM: An efficient lightweight MLP-based network with affine transformation for long-term time series forecasting[J]. Neurocomputing, 2025, 617: 128960. doi: 10.1016/j.neucom.2024.128960.
    [16] ZHOU Haoyi, ZHANG Shanghang, PENG Jieqi, et al. Informer: Beyond efficient transformer for long sequence time-series forecasting[C]. Proceedings of the 35th AAAI Conference on Artificial Intelligence, Palo Alto, USA, 2021: 11106–11115. doi: 10.1609/aaai.v35i12.17325.
    [17] WU Haixu, XU Jiehui, WANG Jianmin, et al. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting[C]. Proceedings of the 35th Conference on Neural Information Processing Systems, Red Hook, USA, 2021: 22419–22430.
    [18] ZHOU Tian, MA Ziqing, WEN Qingsong, et al. Fedformer: Frequency enhanced decomposed transformer for long-term series forecasting[C]. Proceedings of the International Conference on Machine Learning, Baltimore, USA, 2022: 27268–27286.
    [19] NIE Yuqi, NGUYEN N H, SINTHONG P, et al. A time series is worth 64 words: Long-term forecasting with transformers[C]. Proceedings of the 11th International Conference on Learning Representations, Kigali, Rwanda, 2023.
    [20] WU Qiang, YAO Gechang, FENG Zhixi, et al. Peri-midFormer: Periodic pyramid transformer for time series analysis[C]. Proceedings of the 38th International Conference on Neural Information Processing Systems, Vancouver, Canada, 2024: 415. doi: 10.52202/079017-0415.
    [21] LIU Yong, HU Tengge, ZHANG Haoran, et al. iTransformer: Inverted transformers are effective for time series forecasting[C]. Proceedings of the 12th International Conference on Learning Representations, Vienna, Austria, 2024.
    [22] ZHAO Tianlong, FANG Lexin, MA Xiang, et al. TFformer: A time-frequency domain bidirectional sequence-level attention based transformer for interpretable long-term sequence forecasting[J]. Pattern Recognition, 2025, 158: 110994. doi: 10.1016/j.patcog.2024.110994.
    [23] ZHOU Tian, NIU Peisong, WANG Xue, et al. One fits all: Power general time series analysis by pretrained LM[C]. Proceedings of the 37th International Conference on Neural Information Processing Systems, New Orleans, USA, 2023: 1877.
    [24] PIAO Xihao, CHEN Zheng, MURAYAMA T, et al. Fredformer: Frequency debiased transformer for time series forecasting[C]. Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Barcelona, Spain, 2024: 2400–2410. doi: 10.1145/3637528.3671928.
    [25] GAO Shixuan, ZHANG Pingping, YAN Tianyu, et al. Multi-scale and detail-enhanced segment anything model for salient object detection[C]. Proceedings of the 32nd ACM International Conference on Multimedia, Melbourne, Australia, 2024: 9894–9903. doi: 10.1145/3664647.3680650.
    [26] SI Yunzhong, XU Huiying, ZHU Xinzhong, et al. SCSA: Exploring the synergistic effects between spatial and channel attention[J]. Neurocomputing, 2025, 634: 129866. doi: 10.1016/j.neucom.2025.129866.
Publication history
  • Revised: 2026-01-13
  • Accepted: 2026-01-13
  • Available online: 2026-03-06
