Convolutional Neural Network and Vision Transformer-driven Cross-layer Multi-scale Fusion Network for Hyperspectral Image Classification

ZHAO Feng, GENG Miaomiao, LIU Hanqiang, ZHANG Junjie, YU Jun

Citation: ZHAO Feng, GENG Miaomiao, LIU Hanqiang, ZHANG Junjie, YU Jun. Convolutional Neural Network and Vision Transformer-driven Cross-layer Multi-scale Fusion Network for Hyperspectral Image Classification[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT231209


doi: 10.11999/JEIT231209
Funds: The National Natural Science Foundation of China (62071379, 62071378, 62106196), The Youth Innovation Team of Shaanxi Universities
Details
    About the authors:

    ZHAO Feng: Female, Professor. Her research interests include intelligent information processing, pattern recognition, and image processing

    GENG Miaomiao: Female, Master's student. Her research interest is remote sensing image processing

    LIU Hanqiang: Male, Associate Professor. His research interests include pattern recognition and image processing

    ZHANG Junjie: Male, Ph.D. candidate. His research interests include remote sensing image processing and recognition

    YU Jun: Male, Associate Professor. His research interests include artificial intelligence, multimedia computing, and computer vision

    Corresponding author:

    GENG Miaomiao, MiaoGeng1016@163.com

  • CLC number: TN911.73; TP751

  • Abstract: Hyperspectral Image (HSI) classification is one of the most active research topics in earth science and remote sensing image processing. In recent years, methods that combine Convolutional Neural Networks (CNN) with Vision Transformers have achieved success in HSI classification by jointly considering local and global information. However, ground objects in HSI exhibit rich texture information and complex, diverse structures, and different objects vary in scale. Existing combined methods usually have limited ability to extract the texture and structure information of such multi-scale objects. To overcome these limitations, this paper proposes a CNN and Vision Transformer-driven cross-layer multi-scale fusion network for HSI classification. First, from the perspective of combining a CNN with a Vision Transformer, a cross-layer multi-scale local-global feature extraction branch is designed, consisting mainly of a convolution-embedded Vision Transformer and a cross-layer feature fusion module. Specifically, the convolution-embedded Vision Transformer deeply fuses a multi-scale CNN with the Vision Transformer to extract multi-scale local-global features effectively, which strengthens the network's attention to objects at different scales. Furthermore, the cross-layer feature fusion module deeply aggregates multi-scale local-global features from different levels, so that both the shallow texture information and the deep structure information of ground objects are taken into account. Second, a grouped multi-scale convolution branch is constructed to mine the latent multi-scale features of the dense spectral bands in HSI. Finally, to strengthen the network's exploitation of local band details and the overall spectral information in HSI, a residual grouped convolution module is designed to extract local-global spectral features. Experimental results on the Indian Pines, Houston 2013, and Salinas Valley HSI datasets confirm the effectiveness of the proposed method.
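    For readers who think in code, the sketch below illustrates the general idea behind a convolution-embedded Vision Transformer block of the kind described in the abstract: parallel multi-scale depthwise convolutions capture local texture at several receptive fields, multi-head self-attention over the flattened patch captures global context, and the two streams are fused through a residual connection. This is a minimal PyTorch sketch under assumed module names, kernel sizes, and channel widths; it is not the authors' implementation.

```python
import torch
import torch.nn as nn


class MultiScaleLocalGlobalBlock(nn.Module):
    """Fuses multi-scale convolutional (local) and self-attention (global) features."""

    def __init__(self, channels: int, num_heads: int = 4, kernel_sizes=(1, 3, 5)):
        super().__init__()
        # Local branch: one depthwise convolution per scale, then a 1x1 fusion conv.
        self.local_convs = nn.ModuleList(
            [nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
             for k in kernel_sizes]
        )
        self.local_fuse = nn.Conv2d(len(kernel_sizes) * channels, channels, 1)
        # Global branch: multi-head self-attention over the flattened spatial positions.
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) spectral-spatial feature patch
        b, c, h, w = x.shape
        local = self.local_fuse(torch.cat([conv(x) for conv in self.local_convs], dim=1))
        tokens = self.norm(x.flatten(2).transpose(1, 2))        # (b, h*w, c)
        global_feat, _ = self.attn(tokens, tokens, tokens)      # (b, h*w, c)
        global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)
        return x + local + global_feat                          # residual local-global fusion


if __name__ == "__main__":
    patch = torch.randn(2, 64, 9, 9)   # e.g. two 9x9 patches with 64 feature channels
    out = MultiScaleLocalGlobalBlock(channels=64)(patch)
    print(out.shape)                   # torch.Size([2, 64, 9, 9])
```

    In a full model along the lines of Figure 1, blocks of this kind would sit in the cross-layer multi-scale local-global branch and feed a cross-layer feature fusion stage, alongside a separate grouped multi-scale convolution branch operating on the spectral dimension.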
  • Figure 1  CNN and Vision Transformer-driven cross-layer multi-scale fusion network

    Figure 2  Residual grouped convolution module

    Figure 3  Convolution-embedded multi-scale Vision Transformer

    Figure 4  Multi-scale local-global attention mechanism

    Figure 5  Cross-layer feature fusion module

    Figure 6  Grouped multi-scale convolution module

    Figure 7  Effect of the number of encoder layers and attention heads on classification performance

    Figure 8  Effect of patch size on the classification performance of different methods

    Figure 9  Classification maps of different methods on the IP dataset

    Figure 10  Overall accuracy of different methods under different numbers of training samples

    Figure 11  Number of parameters and OA of different methods on the IP, SV, and HU datasets

    Table 1  Basic information of the IP, HU, and SV datasets

    Dataset  Size  Spectral range (μm)  Number of bands  Number of classes  Sensor
    IP  145 × 145  0.45~2.5  200  16  AVIRIS
    HU  349 × 1905  0.364~1.064  144  15  ITRES CASI-1500
    SV  512 × 217  0.4~2.5  204  16  AVIRIS

    Table 2  Effect of different modules on the classification performance of the network (%)

    Dataset  Metric  Network 1  Network 2  Network 3  Network 4  Network 5  Network 6
    IP  OA  83.94 ± 2.51  89.57 ± 0.51  98.66 ± 0.38  98.83 ± 0.25  98.91 ± 0.33  99.13 ± 0.16
        AA  76.92 ± 4.70  83.09 ± 1.15  97.81 ± 1.48  97.90 ± 1.20  97.93 ± 1.06  98.38 ± 0.87
        Kappa  81.60 ± 2.90  88.06 ± 0.59  98.47 ± 0.29  98.76 ± 0.37  98.76 ± 0.37  99.01 ± 0.19
    SV  OA  87.97 ± 1.80  91.06 ± 0.79  98.73 ± 0.32  98.80 ± 0.34  98.87 ± 0.29  98.89 ± 0.25
        AA  91.04 ± 1.71  94.46 ± 0.79  99.22 ± 0.16  99.24 ± 0.12  99.26 ± 0.20  99.30 ± 0.21
        Kappa  86.54 ± 2.03  90.03 ± 0.88  98.58 ± 0.36  98.65 ± 0.23  98.74 ± 0.33  98.77 ± 0.28
    HU  OA  84.99 ± 2.23  94.61 ± 0.44  98.85 ± 0.42  98.88 ± 0.43  98.90 ± 0.53  99.07 ± 0.22
        AA  84.77 ± 2.31  93.38 ± 0.52  98.95 ± 0.37  98.96 ± 0.21  98.97 ± 0.49  99.09 ± 0.20
        Kappa  83.76 ± 2.41  94.17 ± 0.48  98.76 ± 0.45  98.82 ± 0.51  98.99 ± 0.32  98.99 ± 0.24

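    The Overall Accuracy (OA), Average Accuracy (AA), and Kappa coefficient reported in Tables 2–5 are the standard metrics for HSI classification. As a quick reference (not taken from the paper), the short NumPy sketch below shows their conventional definitions in terms of a confusion matrix.

```python
# Standard definitions of OA, AA, and Kappa, computed from a confusion matrix C
# in which C[i, j] counts the test pixels whose true class is i and whose
# predicted class is j.
import numpy as np


def oa_aa_kappa(C: np.ndarray):
    total = C.sum()
    oa = np.trace(C) / total                                  # Overall Accuracy
    aa = np.mean(np.diag(C) / C.sum(axis=1))                  # Average per-class Accuracy
    pe = (C.sum(axis=0) * C.sum(axis=1)).sum() / total ** 2   # expected chance agreement
    kappa = (oa - pe) / (1 - pe)                              # Cohen's Kappa coefficient
    return oa, aa, kappa
```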
    Table 3  Classification performance (%) of different methods on the IP dataset

    Class  FDMFN[10]  LSSCM[12]  RSSAN[11]  ViT[15]  IFormer[17]  SSFTT[18]  GAHT[19]  CTMixer[20]  CTFSN[21]  CTCMN
    1  88.05  58.78  65.12  23.66  63.66  94.15  78.78  94.63  93.17  96.83
    2  97.06  86.62  90.73  76.10  89.32  99.04  96.14  98.58  98.54  99.06
    3  97.82  88.07  89.33  79.83  93.81  98.39  97.54  98.66  98.45  98.94
    4  96.41  87.11  70.00  82.49  81.66  98.08  93.22  98.78  97.85  98.32
    5  97.04  89.38  93.06  72.97  94.74  98.09  96.08  96.61  97.96  97.82
    6  98.98  88.01  98.16  98.71  98.89  99.67  99.54  99.27  99.62  99.62
    7  88.00  66.92  68.00  54.80  64.80  95.20  91.60  96.40  93.60  94.40
    8  99.95  89.85  99.67  99.98  99.86  100.0  99.86  100.0  100.0  100.0
    9  92.22  20.53  47.78  42.22  65.56  76.67  85.56  90.56  94.44  97.78
    10  97.44  88.05  91.15  68.73  92.38  98.52  96.35  98.49  98.52  99.12
    11  99.08  95.25  95.14  89.18  96.60  99.45  97.54  99.06  99.20  99.40
    12  92.92  84.54  79.25  66.73  81.47  95.66  92.49  97.23  97.92  98.09
    13  99.95  97.79  98.75  99.95  99.67  100.0  98.04  99.67  100.0  99.51
    14  98.84  94.78  97.43  96.03  98.78  99.06  98.05  99.54  99.24  99.80
    15  97.43  84.23  75.91  79.41  80.88  96.55  95.82  99.39  97.31  99.05
    16  97.86  96.19  83.81  100.0  93.57  95.71  90.12  97.14  96.67  96.79
    OA  97.88  91.36  91.72  83.93  93.38  98.68  96.84  98.76  98.76  99.14
    AA  96.19  75.85  83.95  76.92  87.22  96.51  94.17  97.75  97.65  98.40
    Kappa  97.58  90.14  90.54  81.60  92.43  98.50  96.39  98.58  98.58  99.02

    Table 4  Classification performance (%) of different methods on the SV dataset

    Class  FDMFN[10]  LSSCM[12]  RSSAN[11]  ViT[15]  IFormer[17]  SSFTT[18]  GAHT[19]  CTMixer[20]  CTFSN[21]  CTCMN
    1  99.79  97.83  99.40  98.20  99.71  98.71  78.78  100.0  99.94  100.0
    2  99.99  99.85  99.50  99.95  99.98  99.53  96.14  100.0  100.0  100.0
    3  99.26  88.93  97.08  78.28  99.98  98.45  97.54  99.96  99.83  99.98
    4  99.21  99.30  98.43  97.53  98.64  98.88  93.22  98.57  98.89  98.95
    5  99.51  97.36  98.54  92.56  98.79  99.15  96.08  99.03  99.67  99.22
    6  100.0  99.95  99.97  100.0  100.0  99.94  99.54  99.99  100.0  100.0
    7  99.98  99.72  98.94  96.03  99.05  98.93  91.60  99.98  99.97  99.98
    8  92.52  90.61  90.90  89.17  88.43  90.29  99.86  97.14  93.02  97.11
    9  99.99  99.99  99.89  99.97  100.0  99.70  85.56  100.0  100.0  100.0
    10  97.91  93.93  96.35  91.16  96.75  97.41  96.35  98.60  97.95  99.01
    11  98.01  90.02  92.78  80.45  95.75  95.90  97.54  99.90  99.82  99.81
    12  99.95  99.38  99.53  97.92  99.83  100.0  92.49  99.97  99.99  99.98
    13  99.97  97.81  98.80  98.68  99.65  99.40  98.04  99.59  99.92  99.68
    14  99.44  98.59  97.21  95.73  97.79  97.87  98.05  99.01  99.67  99.01
    15  92.84  86.85  82.77  49.05  72.10  84.88  95.82  97.30  92.49  97.74
    16  98.18  95.32  95.07  91.97  98.42  97.19  90.12  99.75  99.46  99.76
    OA  97.15  94.80  91.72  87.97  93.24  95.24  96.84  98.82  97.32  98.93
    AA  98.53  95.96  83.95  91.04  96.49  97.26  94.17  99.29  98.78  99.38
    Kappa  96.83  94.21  90.54  86.54  94.70  94.70  96.39  98.69  97.02  98.80

    Table 5  Classification performance (%) of different methods on the HU dataset

    Class  FDMFN[10]  LSSCM[12]  RSSAN[11]  ViT[15]  IFormer[17]  SSFTT[18]  GAHT[19]  CTMixer[20]  CTFSN[21]  CTCMN
    1  99.06  96.87  97.41  89.47  98.43  99.02  95.82  97.88  98.65  99.26
    2  99.35  97.02  99.55  96.63  99.35  99.43  95.96  97.69  98.92  99.60
    3  99.89  99.62  99.52  95.90  99.97  99.94  99.81  99.54  99.70  99.87
    4  96.14  92.58  97.51  94.26  97.47  97.96  95.38  96.18  95.63  99.30
    5  99.98  99.66  99.38  99.49  99.96  99.95  99.79  99.94  99.94  100.0
    6  96.75  87.29  85.45  86.13  90.17  96.34  95.68  98.39  96.85  99.11
    7  97.14  91.48  94.21  87.80  95.59  97.60  95.94  97.32  97.77  98.35
    8  95.23  86.08  90.45  61.32  89.45  95.70  92.59  96.04  93.81  97.26
    9  95.86  87.59  92.05  79.05  94.02  97.47  95.19  96.12  94.21  98.25
    10  99.54  96.03  96.24  79.38  98.34  99.54  98.79  99.88  97.72  100.0
    11  97.20  90.41  92.38  77.41  96.22  98.91  97.18  96.74  96.69  99.24
    12  98.45  94.03  96.12  80.59  95.66  97.77  95.30  96.48  97.54  98.94
    13  94.93  91.26  83.55  47.23  95.19  97.99  95.83  93.63  94.64  97.20
    14  100.0  99.84  99.19  96.91  99.97  100.0  100.0  100.0  99.95  100.0
    15  99.98  99.92  99.81  100.0  100.0  99.97  100.0  99.93  100.0  100.0
    OA  97.96  93.97  95.41  84.77  96.65  98.50  96.88  97.71  97.33  99.07
    AA  97.77  94.21  94.85  83.76  96.42  98.34  96.33  97.41  97.47  99.09
    Kappa  98.90  96.87  95.04  89.47  98.43  98.92  95.82  97.88  97.12  98.99

    Table 6  Number of parameters, training time, and testing time of different methods on the IP, SV, and HU datasets

    Dataset  Metric  FDMFN  LSSCM  RSSAN  ViT  IFormer  SSFTT  GAHT  CTMixer  CTFSN  CTCMN
    IP  Parameters (M)  0.54  0.01  0.17  0.68  0.63  0.93  1.22  0.61  0.55  0.79
        Training time (s)  34.43  38.30  38.79  27.55  90.85  35.36  98.87  82.24  39.32  95.19
        Testing time (s)  1.92  1.73  1.95  2.06  6.35  2.40  6.99  5.22  2.41  6.29
    SV  Parameters (M)  0.54  0.01  0.17  0.68  0.62  0.95  0.97  0.61  0.54  0.77
        Training time (s)  18.91  20.84  20.93  14.67  46.83  18.81  49.69  44.33  20.75  48.70
        Testing time (s)  12.20  10.29  10.80  10.94  29.31  13.11  28.52  28.24  11.88  23.80
    HU  Parameters (M)  0.53  0.01  0.14  0.66  0.63  0.72  1.19  0.64  0.54  0.79
        Training time (s)  45.01  50.34  51.607  34.68  66.362  40.89  99.03  64.32  51.88  68.36
        Testing time (s)  52.90  50.46  57.57  54.43  68.274  57.39  116.52  69.27  63.11  89.09
  • [1] BIOUCAS-DIAS J M, PLAZA A, CAMPS-VALLS G, et al. Hyperspectral remote sensing data analysis and future challenges[J]. IEEE Geoscience and Remote Sensing Magazine, 2013, 1(2): 6–36. doi: 10.1109/MGRS.2013.2244672.
    [2] KHAN I H, LIU Haiyan, LI Wei, et al. Early detection of powdery mildew disease and accurate quantification of its severity using hyperspectral images in wheat[J]. Remote Sensing, 2021, 13(18): 3612. doi: 10.3390/rs13183612.
    [3] SUN Mingyue, LI Qian, JIANG Xuzi, et al. Estimation of soil salt content and organic matter on arable land in the yellow river delta by combining UAV hyperspectral and landsat-8 multispectral imagery[J]. Sensors, 2022, 22(11): 3990. doi: 10.3390/s22113990.
    [4] STUART M B, MCGONIGLE A J S, and WILLMOTT J R. Hyperspectral imaging in environmental monitoring: A review of recent developments and technological advances in compact field deployable systems[J]. Sensors, 2019, 19(14): 3071. doi: 10.3390/s19143071.
    [5] BAZI Y and MELGANI F. Toward an optimal SVM classification system for hyperspectral remote sensing images[J]. IEEE Transactions on Geoscience and Remote Sensing, 2006, 44(11): 3374–3385. doi: 10.1109/TGRS.2006.880628.
    [6] GU Yanfeng, CHANUSSOT J, JIA Xiuping, et al. Multiple kernel learning for hyperspectral image classification: A review[J]. IEEE Transactions on Geoscience and Remote Sensing, 2017, 55(11): 6547–6565. doi: 10.1109/TGRS.2017.2729882.
    [7] LICCIARDI G A and CHANUSSOT J. Nonlinear PCA for visible and thermal hyperspectral images quality enhancement[J]. IEEE Geoscience and Remote Sensing Letters, 2015, 12(6): 1228–1231. doi: 10.1109/LGRS.2015.2389269.
    [8] ROY S K, KRISHNA G, DUBEY S R, et al. HybridSN: Exploring 3-D–2-D CNN feature hierarchy for hyperspectral image classification[J]. IEEE Geoscience and Remote Sensing Letters, 2020, 17(2): 277–281. doi: 10.1109/LGRS.2019.2918719.
    [9] GONG Zhiqiang, ZHONG Ping, YU Yang, et al. A CNN with multiscale convolution and diversified metric for hyperspectral image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(6): 3599–3618. doi: 10.1109/TGRS.2018.2886022.
    [10] MENG Zhe, LI Lingling, JIAO Licheng, et al. Fully dense multiscale fusion network for hyperspectral image classification[J]. Remote Sensing, 2019, 11(22): 2718. doi: 10.3390/rs11222718.
    [11] ZHU Minghao, JIAO Licheng, LIU Fang, et al. Residual spectral–spatial attention network for hyperspectral image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(1): 449–462. doi: 10.1109/TGRS.2020.2994057.
    [12] MENG Zhe, JIAO Licheng, LIANG Miaomiao, et al. A lightweight spectral-spatial convolution module for hyperspectral image classification[J]. IEEE Geoscience and Remote Sensing Letters, 2022, 19: 5505105. doi: 10.1109/LGRS.2021.3069202.
    [13] LIU Na, LI Wei, and TAO Ran. Typical application of graph signal processing in hyperspectral image processing[J]. Journal of Electronics & Information Technology, 2023, 45(5): 1529–1540. doi: 10.11999/JEIT220887.
    [14] HONG Danfeng, GAO Lianru, YAO Jing, et al. Graph convolutional networks for hyperspectral image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(7): 5966–5978. doi: 10.1109/TGRS.2020.3015157.
    [15] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: Transformers for image recognition at scale[C]. 9th International Conference on Learning Representations, 2021.
    [16] HONG Danfeng, HAN Zhu, YAO Jing, et al. SpectralFormer: Rethinking hyperspectral image classification with transformers[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5518615. doi: 10.1109/TGRS.2021.3130716.
    [17] REN Qi, TU Bing, LIAO Sha, et al. Hyperspectral image classification with IFormer network feature extraction[J]. Remote Sensing, 2022, 14(19): 4866. doi: 10.3390/rs14194866.
    [18] SUN Le, ZHAO Guangrui, ZHENG Yuhui, et al. Spectral-spatial feature tokenization transformer for hyperspectral image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5522214. doi: 10.1109/TGRS.2022.3144158.
    [19] MEI Shaohui, SONG Chao, MA Mingyang, et al. Hyperspectral image classification using group-aware hierarchical transformer[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5539014. doi: 10.1109/TGRS.2022.3207933.
    [20] ZHANG Junjie, MENG Zhe, ZHAO Feng, et al. Convolution transformer mixer for hyperspectral image classification[J]. IEEE Geoscience and Remote Sensing Letters, 2022, 19: 6014205. doi: 10.1109/LGRS.2022.3208935.
    [21] ZHAO Feng, LI Shijie, ZHANG Junjie, et al. Convolution transformer fusion splicing network for hyperspectral image classification[J]. IEEE Geoscience and Remote Sensing Letters, 2023, 20: 5501005. doi: 10.1109/LGRS.2022.3231874.
    [22] LIU Na, LI Wei, SUN Xian, et al. Remote sensing image fusion with task-inspired multiscale nonlocal-attention network[J]. IEEE Geoscience and Remote Sensing Letters, 2023, 20: 5502505. doi: 10.1109/LGRS.2023.3254049.
    [23] YANG Jiaqi, DU Bo, and WU Chen. Hybrid vision transformer model for hyperspectral image classification[C]. IGARSS 2022 - 2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 2022: 1388–1391. doi: 10.1109/IGARSS46834.2022.9884262.
    [24] SANDLER M, HOWARD A, ZHU Menglong, et al. MobileNetV2: Inverted residuals and linear bottlenecks[C]. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 4510–4520. doi: 10.1109/CVPR.2018.00474.
    [25] WANG Qilong, WU Banggu, ZHU Pengfei, et al. ECA-Net: Efficient channel attention for deep convolutional neural networks[C]. The IEEE Conference on Computer Vision and Pattern Recognition, Seattle, USA, 2020: 11531–11539. doi: 10.1109/CVPR42600.2020.01155.
Publication history
  • Received: 2023-11-01
  • Revised: 2024-03-31
  • Published online: 2024-04-18
