
Feature Fusion Classification for Optical Image and SAR Image Based on Spatial-spectral Attention

JIANG Wen, PAN Jie, ZHU Jinbiao, YUE Xijuan

Citation: JIANG Wen, PAN Jie, ZHU Jinbiao, YUE Xijuan. Feature Fusion Classification for Optical Image and SAR Image Based on Spatial-spectral Attention[J]. Journal of Electronics & Information Technology, 2023, 45(3): 987-995. doi: 10.11999/JEIT220063


doi: 10.11999/JEIT220063
Article information
    Author profiles:

    JIANG Wen: Male, Assistant Researcher, Ph.D. His research interests include synthetic aperture radar signal processing and applications

    PAN Jie: Female, Professorate Senior Engineer, Ph.D. Her research interests include airborne remote sensing technology and applications

    ZHU Jinbiao: Male, Professorate Senior Engineer, M.S. His research interests include airborne remote sensing technology and applications

    YUE Xijuan: Female, Senior Engineer, Ph.D. Her research interests include synthetic aperture radar information processing and applications

    Corresponding author:

    PAN Jie, panjie@aircas.ac.cn

  • CLC number: TN958; TP751

Feature Fusion Classification for Optical Image and SAR Image Based on Spatial-spectral Attention

  • Abstract: To address the differences and complementarity among multi-source remote sensing images, this paper proposes a feature fusion classification method for optical and SAR images based on spatial and spectral attention. Convolutional neural networks first extract features from the optical image and the SAR image separately. An attention module composed of spatial attention and spectral attention then analyzes the importance of each feature and generates per-feature weights that enhance the fused features while suppressing attention to uninformative ones, thereby improving the fusion classification accuracy for optical and SAR imagery. Comparative experiments on two optical-SAR image datasets show that the proposed method achieves higher fusion classification accuracy than the compared approaches.
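The pipeline described in the abstract (two CNN branches, an attention module combining spatial and spectral attention, weighted fusion of the two feature sets) can be sketched as follows. This is a minimal NumPy illustration of the general idea only, not the authors' implementation: the learned convolution and MLP inside the real attention module are replaced by fixed pooling-plus-sigmoid stand-ins, and all array shapes are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feat):
    """Per-pixel weight map from channel-wise average and max pooling.
    feat: (C, H, W) -> weights of shape (1, H, W), each in (0, 1)."""
    avg = feat.mean(axis=0, keepdims=True)   # (1, H, W)
    mx = feat.max(axis=0, keepdims=True)     # (1, H, W)
    return sigmoid(avg + mx)                 # stand-in for a learned conv layer

def spectral_attention(feat):
    """Per-channel weight vector from spatial average and max pooling.
    feat: (C, H, W) -> weights of shape (C, 1, 1), each in (0, 1)."""
    avg = feat.mean(axis=(1, 2), keepdims=True)  # (C, 1, 1)
    mx = feat.max(axis=(1, 2), keepdims=True)    # (C, 1, 1)
    return sigmoid(avg + mx)                     # stand-in for a learned MLP

def fuse(feat_opt, feat_sar):
    """Reweight each branch by its spatial and spectral attention,
    then concatenate along the channel axis for classification."""
    def reweight(f):
        return f * spatial_attention(f) * spectral_attention(f)
    return np.concatenate([reweight(feat_opt), reweight(feat_sar)], axis=0)

# Toy features: 4 channels on an 8x8 grid for each modality.
rng = np.random.default_rng(0)
f_opt = rng.standard_normal((4, 8, 8))
f_sar = rng.standard_normal((4, 8, 8))
fused = fuse(f_opt, f_sar)
print(fused.shape)  # (8, 8, 8): both branches' channels, attention-reweighted
```

The broadcasting does the work here: the (1, H, W) spatial map scales every channel at each pixel, while the (C, 1, 1) spectral vector scales every pixel of each channel, so informative locations and bands are amplified jointly.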
  • Figure 1  Feature fusion model with spatial and spectral attention

    Figure 2  Attention module

    Figure 3  Dataset A

    Figure 4  Dataset B

    Figure 5  Comparison of results on Dataset A under the different methods

    Figure 6  Comparison of results on Dataset B under the different methods

    Figure 7  Comparison of results on Dataset D under the different methods

    Table 1  Number of samples per class in each dataset

    | Class     | A total | A train | A val  | A test  | B total | B train | B val  | B test  |
    | Building  | 129629  | 20000   | 20000  | 89629   | 200251  | 20000   | 20000  | 160251  |
    | Road      | 305221  | 30000   | 20000  | 255221  | 320238  | 30000   | 20000  | 270238  |
    | Bare soil | 284388  | 30000   | 20000  | 234388  | 539888  | 50000   | 40000  | 449888  |
    | Grass     | 695808  | 70000   | 60000  | 565808  | 263817  | 20000   | 20000  | 223817  |
    | Trees     | 84954   | 10000   | 10000  | 64954   | 175615  | 20000   | 20000  | 135615  |
    | Total     | 1500000 | 160000  | 130000 | 1210000 | 1500000 | 140000  | 120000 | 1240000 |

    Table 2  Per-class accuracy (%) on Dataset A for the different methods, with OA, AA and Kappa

    | Class     | CNN(S) | CNN-ATT(S) | CNN(O) | CNN-ATT(O) | CNN(S+O) | Two-branch | MDL-Middle | TAFFN  | CNN-ATT(S+O) |
    | Building  | 72.18  | 75.97      | 96.94  | 96.26      | 95.69    | 95.58      | 97.20      | 94.98  | 94.96        |
    | Road      | 75.91  | 76.11      | 94.76  | 95.08      | 94.95    | 91.56      | 92.35      | 94.17  | 95.68        |
    | Bare soil | 53.59  | 54.49      | 84.17  | 86.41      | 86.40    | 83.19      | 84.94      | 87.81  | 91.92        |
    | Grass     | 71.51  | 71.92      | 95.58  | 94.89      | 94.59    | 91.49      | 91.80      | 93.84  | 92.76        |
    | Trees     | 47.37  | 43.38      | 70.50  | 70.81      | 80.03    | 60.90      | 71.35      | 78.57  | 85.91        |
    | OA        | 68.98  | 69.39      | 91.58  | 91.88      | 92.33    | 88.46      | 90.06      | 92.03  | 93.06        |
    | AA        | 64.11  | 64.37      | 88.39  | 88.69      | 90.22    | 84.55      | 87.53      | 89.87  | 92.25        |
    | Kappa     | 0.5349 | 0.5415     | 0.8805 | 0.8843     | 0.8897   | 0.8347     | 0.8568     | 0.8856 | 0.8997       |
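The tables above and below report per-class accuracy together with OA (overall accuracy), AA (average accuracy, the mean per-class recall) and the Kappa coefficient. As a reminder of how these three scores are derived from a confusion matrix, here is a generic sketch with a made-up 3-class matrix; it is not the paper's evaluation code.

```python
import numpy as np

def classification_scores(cm):
    """OA, AA and Cohen's kappa from a confusion matrix where
    cm[i, j] = number of samples of true class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n                                 # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))            # mean per-class recall
    pe = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n**2   # chance agreement
    kappa = (oa - pe) / (1.0 - pe)                        # agreement beyond chance
    return oa, aa, kappa

# Hypothetical 3-class confusion matrix, for illustration only.
cm = [[90,  5,  5],
      [10, 80, 10],
      [ 0, 10, 90]]
oa, aa, kappa = classification_scores(cm)
print(round(oa, 4), round(aa, 4), round(kappa, 4))  # 0.8667 0.8667 0.8
```

Kappa discounts the accuracy a random classifier would achieve given the class marginals, which is why it is the stricter of the three on imbalanced datasets like those in Table 1.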

    Table 3  Per-class accuracy (%) on Dataset B for the different methods, with OA, AA and Kappa

    | Class     | CNN(S) | CNN-ATT(S) | CNN(O) | CNN-ATT(O) | CNN(S+O) | Two-branch | MDL-Middle | TAFFN  | CNN-ATT(S+O) |
    | Building  | 72.79  | 74.33      | 97.05  | 97.92      | 96.26    | 93.73      | 94.71      | 96.46  | 96.41        |
    | Road      | 73.11  | 71.08      | 91.22  | 91.19      | 92.24    | 86.03      | 88.94      | 93.23  | 92.82        |
    | Bare soil | 56.88  | 57.54      | 88.86  | 88.23      | 87.56    | 81.79      | 84.40      | 85.98  | 88.67        |
    | Grass     | 60.07  | 59.01      | 83.33  | 83.84      | 85.95    | 79.99      | 81.69      | 84.52  | 87.01        |
    | Trees     | 47.58  | 49.56      | 77.66  | 79.86      | 79.86    | 71.91      | 76.42      | 81.64  | 84.17        |
    | OA        | 59.67  | 60.60      | 88.11  | 88.30      | 88.34    | 82.64      | 85.21      | 88.03  | 89.75        |
    | AA        | 51.74  | 51.92      | 73.02  | 73.42      | 73.64    | 68.91      | 71.03      | 73.63  | 74.84        |
    | Kappa     | 0.4497 | 0.4626     | 0.8436 | 0.8458     | 0.8462   | 0.7704     | 0.8048     | 0.8416 | 0.8648       |

    Table 4  Training and test times (s) of the different methods on Datasets A and B

    | Dataset | Time  | CNN(S)  | CNN-ATT(S) | CNN(O)  | CNN-ATT(O) | CNN(S+O) | Two-branch | MDL-Middle | TAFFN   | CNN-ATT(S+O) |
    | A       | Train | 339.794 | 492.978    | 368.404 | 505.159    | 534.780  | 282.893    | 365.131    | 440.552 | 1262.196     |
    | A       | Test  | 43.584  | 56.762     | 45.952  | 58.979     | 58.454   | 89.591     | 65.505     | 129.132 | 119.097      |
    | B       | Train | 285.813 | 473.852    | 301.061 | 462.599    | 477.815  | 218.434    | 299.963    | 355.166 | 1060.421     |
    | B       | Test  | 42.754  | 54.065     | 42.538  | 66.526     | 56.818   | 59.116     | 66.372     | 99.237  | 103.138      |

    Table 5  Cross-validation dataset

    | Class        | 1      | 2      | 3      | 4     | 5     | Total  |
    | Training set | 10000  | 20000  | 30000  | 22000 | 9000  | 91000  |
    | Test set     | 116741 | 167431 | 327719 | 99719 | 38390 | 750000 |

    Table 6  Cross-validation per-class accuracy (%), with OA, AA and Kappa

    | Class     | CNN(S+O) | Two-branch | MDL-Middle | TAFFN  | CNN-ATT(S+O) |
    | Building  | 91.54    | 87.97      | 89.97      | 92.80  | 92.46        |
    | Road      | 70.79    | 65.41      | 67.83      | 66.29  | 64.68        |
    | Bare soil | 87.64    | 86.97      | 85.94      | 87.45  | 87.44        |
    | Grass     | 45.49    | 47.70      | 49.92      | 49.00  | 53.17        |
    | Trees     | 49.75    | 50.21      | 51.57      | 57.27  | 58.04        |
    | OA        | 72.19    | 71.52      | 72.64      | 72.71  | 73.07        |
    | AA        | 69.04    | 67.65      | 69.05      | 70.56  | 71.16        |
    | Kappa     | 0.6280   | 0.6167     | 0.6300     | 0.6319 | 0.6367       |
  • [1] SUKAWATTANAVIJIT C, CHEN Jie, and ZHANG Hongsheng. GA-SVM algorithm for improving land-cover classification using SAR and optical remote sensing data[J]. IEEE Geoscience and Remote Sensing Letters, 2017, 14(3): 284–288. doi: 10.1109/LGRS.2016.2628406
    [2] LI Lu, DU Lan, HE Haonan, et al. Multi-level feature fusion SAR automatic target recognition based on deep forest[J]. Journal of Electronics & Information Technology, 2021, 43(3): 606–614. doi: 10.11999/JEIT200685 (in Chinese)
    [3] ALONSO-GONZÁLEZ A, LÓPEZ-MARTÍNEZ C, PAPATHANASSIOU K P, et al. Polarimetric SAR time series change analysis over agricultural areas[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 58(10): 7317–7330. doi: 10.1109/TGRS.2020.2981929
    [4] ZHANG Hongsheng and XU Ru. Exploring the optimal integration levels between SAR and optical data for better urban land cover mapping in the Pearl River Delta[J]. International Journal of Applied Earth Observation and Geoinformation, 2018, 64: 87–95. doi: 10.1016/j.jag.2017.08.013
    [5] KUSSUL N, LAVRENIUK M, SKAKUN S, et al. Deep learning classification of land cover and crop types using remote sensing data[J]. IEEE Geoscience and Remote Sensing Letters, 2017, 14(5): 778–782. doi: 10.1109/LGRS.2017.2681128
    [6] ZHANG Xiangrong, WANG Xin, TANG Xu, et al. Description generation for remote sensing images using attribute attention mechanism[J]. Remote Sensing, 2019, 11(6): 612. doi: 10.3390/rs11060612
    [7] XIE Jie, HE Nanjun, FANG Leyuan, et al. Scale-free convolutional neural network for remote sensing scene classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2019, 57(9): 6916–6928. doi: 10.1109/TGRS.2019.2909695
    [8] KRIZHEVSKY A, SUTSKEVER I, and HINTON G E. ImageNet classification with deep convolutional neural networks[C]. Proceedings of the 25th International Conference on Neural Information Processing Systems, Lake Tahoe, USA, 2012: 1097–1105.
    [9] SIMONYAN K and ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv: 1409.1556, 2014.
    [10] SZEGEDY C, LIU Wei, JIA Yangqing, et al. Going deeper with convolutions[C]. Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, USA, 2015: 1–9.
    [11] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Identity mappings in deep residual networks[C]. Proceedings of the 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 2016: 630–645.
    [12] ZHOU Shunjie, YANG Xuezhi, DONG Zhangyu, et al. Fusion algorithm of SAR and visible images for feature recognition[J]. Journal of Hefei University of Technology (Natural Science), 2018, 41(7): 900–907. doi: 10.3969/j.issn.1003-5060.2018.07.008 (in Chinese)
    [13] LEI Junjie, YANG Wunian, LI Hong, et al. Research on cooperative classification of sentinel optical and SAR satellite images[J]. Modern Electronics Technique, 2022, 45(2): 135–139. doi: 10.16652/j.issn.1004-373x.2022.02.026 (in Chinese)
    [14] KONG Yingying, YAN Biyuan, LIU Yanjuan, et al. Feature-level fusion of polarized SAR and optical images based on random forest and conditional random fields[J]. Remote Sensing, 2021, 13(7): 1323. doi: 10.3390/rs13071323
    [15] XU Zhe, ZHU Jinbiao, GENG Jie, et al. Triplet attention feature fusion network for SAR and optical image land cover classification[C]. Proceedings of 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 2021: 4256–4259.
    [16] XU Xiaodong, LI Wei, RAN Qiong, et al. Multisource remote sensing data classification based on convolutional neural network[J]. IEEE Transactions on Geoscience and Remote Sensing, 2018, 56(2): 937–949. doi: 10.1109/TGRS.2017.2756851
    [17] HONG Danfeng, GAO Lianru, YOKOYA N, et al. More diverse means better: Multimodal deep learning meets remote-sensing imagery classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(5): 4340–4354. doi: 10.1109/TGRS.2020.3016820
    [18] HU Jie, SHEN Li, and SUN Gang. Squeeze-and-excitation networks[C]. Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 7132–7141.
    [19] WOO S, PARK J, LEE J Y, et al. CBAM: Convolutional block attention module[C]. Proceedings of the 15th European Conference on Computer Vision (ECCV), Munich, Germany, 2018: 3–19.
    [20] PARK J, WOO S, LEE J Y, et al. A simple and light-weight attention module for convolutional neural networks[J]. International Journal of Computer Vision, 2020, 128(4): 783–798. doi: 10.1007/s11263-019-01283-0
    [21] FU Jun, LIU Jing, TIAN Haijie, et al. Dual attention network for scene segmentation[C]. Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, USA, 2019: 3141–3149.
    [22] MISRA D. Mish: A self regularized non-monotonic neural activation function[J]. arXiv: 1908.08681, 2019.
    [23] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778.
    [24] XU Cong’an, LÜ Yafei, ZHANG Xiaohan, et al. A discriminative feature representation method based on dual attention mechanism for remote sensing image scene classification[J]. Journal of Electronics & Information Technology, 2021, 43(3): 683–691. doi: 10.11999/JEIT200568 (in Chinese)
Publication history
  • Received: 2022-01-13
  • Revised: 2022-05-28
  • Published online: 2022-06-10
  • Issue date: 2023-03-10
