
Research on Multi-scale Residual UNet Fused with Depthwise Separable Convolution in PolSAR Terrain Classification

XIE Wen, WANG Ruonan, YANG Xin, LI Yongheng

Citation: XIE Wen, WANG Ruonan, YANG Xin, LI Yongheng. Research on Multi-scale Residual UNet Fused with Depthwise Separable Convolution in PolSAR Terrain Classification[J]. Journal of Electronics & Information Technology, 2023, 45(8): 2975-2985. doi: 10.11999/JEIT220867


doi: 10.11999/JEIT220867
Funds: The National Natural Science Foundation of China (61901365, 62071379), The Natural Science Foundation of Shaanxi Province (2019JQ-377), Shaanxi Provincial Department of Education Special Scientific Research Program (19JK0805), The New Star Team of Xi'an University of Posts and Telecommunications (xyt2016-01)
Details
    About the authors:

    XIE Wen: female, lecturer; research interests include remote sensing image processing, deep learning, and sparse representation

    WANG Ruonan: female, master's student; research interests include deep learning and PolSAR image classification

    YANG Xin: male, master's student; research interests include deep learning and PolSAR image classification

    LI Yongheng: male, master's student; research interest is intelligent image processing

    Corresponding author:

    XIE Wen, xiewen@xupt.edu.cn

  • CLC number: TN957

Research on Multi-scale Residual UNet Fused with Depthwise Separable Convolution in PolSAR Terrain Classification

Funds: The National Natural Science Foundation of China (61901365, 62071379), The Natural Science Foundation of Shaanxi Province (2019JQ-377), Shaanxi Provincial Department of Education Special Scientific Research Program (19JK0805), The New Star Team of Xi'an University of Posts and Telecommunications (xyt2016-01)
  • Abstract: As one of the important topics in Synthetic Aperture Radar (SAR) image interpretation, Polarimetric Synthetic Aperture Radar (PolSAR) terrain classification is attracting growing attention from scholars at home and abroad. Unlike natural images, PolSAR data not only have unique data properties but also form small-sample datasets, so making fuller use of the data characteristics and of the labeled samples is a key concern. To address these issues, this paper proposes a new UNet-based network architecture for PolSAR terrain classification, the Multiscale Separable Residual Unet (MSR-Unet). The network first replaces ordinary 2D convolution with depthwise separable convolution, extracting the spatial and channel features of the input data separately and reducing feature redundancy. Second, an improved multi-scale residual structure is proposed: built on the residual structure, it obtains features at different scales through convolution kernels of different sizes and reuses features via dense connections. This structure not only deepens the network to some extent and yields better features, but also lets the network exploit the labeled samples fully and improves the efficiency of feature transfer, thereby raising the classification accuracy for PolSAR terrain. Experiments on three standard datasets show that, compared with traditional classification methods and other mainstream deep-learning models such as UNet, MSR-Unet improves the average accuracy, overall accuracy and Kappa coefficient to varying degrees, with better robustness.
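The depthwise separable convolution described above factors a standard convolution into a per-channel spatial filter (depthwise step) followed by a 1×1 channel-mixing step (pointwise step). A minimal numpy sketch of the idea, assuming 'valid' padding, stride 1, and no bias or nonlinearity; all names and shapes here are illustrative, not the paper's implementation:

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Depthwise separable convolution on a (H, W, C_in) array.

    dw_kernels: (k, k, C_in) -- one spatial filter per input channel
    pw_weights: (C_in, C_out) -- 1x1 pointwise channel-mixing weights
    """
    H, W, C_in = x.shape
    k = dw_kernels.shape[0]
    Ho, Wo = H - k + 1, W - k + 1

    # Depthwise step: filter each channel independently (spatial features).
    dw = np.empty((Ho, Wo, C_in))
    for c in range(C_in):
        for i in range(Ho):
            for j in range(Wo):
                dw[i, j, c] = np.sum(x[i:i + k, j:j + k, c] * dw_kernels[:, :, c])

    # Pointwise step: 1x1 convolution mixes channels (channel features).
    return dw @ pw_weights

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 6))  # e.g. a 6-channel PolSAR feature patch
y = depthwise_separable_conv(x,
                             rng.standard_normal((3, 3, 6)),
                             rng.standard_normal((6, 16)))
print(y.shape)  # (6, 6, 16)
```

Because the spatial and channel mixing are decoupled, the two small weight tensors replace one large (k, k, C_in, C_out) kernel, which is the source of the redundancy reduction the abstract mentions.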
  • Figure 1  Depthwise (channel-wise) convolution

    Figure 2  Pointwise convolution

    Figure 3  Multi-scale residual structure

    Figure 4  Improved multi-scale residual structure

    Figure 5  MSR-Unet network architecture

    Figure 6  Classification results of different methods on the German ESAR dataset

    Figure 7  Classification results of different methods on the Xi'an dataset

    Figure 8  Classification results of different methods on the San Francisco dataset

    Figure 9  Distribution bands of evaluation metric values of different network models on the datasets

    Table 1  Ablation experiment results (models A–E are combinations of the components Unet, Res, DSC, MRes and EMRes; ESAR = German ESAR dataset, SF = San Francisco dataset)

    | Model | ESAR AA | ESAR OA | ESAR Kappa | Xi'an AA | Xi'an OA | Xi'an Kappa | SF AA | SF OA | SF Kappa |
    |-------|---------|---------|------------|----------|----------|-------------|-------|-------|----------|
    | A | 0.9489 | 0.9559 | 0.9262 | 0.9752 | 0.9757 | 0.9622 | 0.9415 | 0.9692 | 0.9499 |
    | B | 0.9564 | 0.9612 | 0.9349 | 0.9721 | 0.9785 | 0.9666 | 0.9487 | 0.9717 | 0.9538 |
    | C | 0.9639 | 0.9675 | 0.9456 | 0.9758 | 0.9791 | 0.9675 | 0.9512 | 0.9754 | 0.9599 |
    | D | 0.9669 | 0.9708 | 0.9511 | 0.9852 | 0.9847 | 0.9760 | 0.9709 | 0.9851 | 0.9756 |
    | E | 0.9691 | 0.9726 | 0.9542 | 0.9844 | 0.9864 | 0.9788 | 0.9776 | 0.9879 | 0.9803 |
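The AA (average accuracy), OA (overall accuracy) and Kappa values reported in the tables all derive from a class confusion matrix in the standard way. A hedged numpy sketch; the function name and the toy 3-class counts are illustrative, not data from the paper:

```python
import numpy as np

def classification_metrics(conf):
    """AA, OA and Kappa from a confusion matrix (rows = true, cols = predicted)."""
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    oa = np.trace(conf) / n                           # overall accuracy
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))    # mean per-class recall
    pe = np.sum(conf.sum(axis=0) * conf.sum(axis=1)) / n ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return aa, oa, kappa

# Toy 3-class example (hypothetical counts)
conf = [[90,  5,  5],
        [10, 80, 10],
        [ 0,  5, 95]]
aa, oa, kappa = classification_metrics(conf)
print(round(aa, 4), round(oa, 4), round(kappa, 4))  # 0.8833 0.8833 0.825
```

OA weights classes by their pixel counts, AA treats all classes equally, and Kappa discounts the agreement expected by chance, which is why all three are reported together for imbalanced PolSAR scenes.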

    Table 2  Classification accuracy of different methods on the German ESAR dataset

    | Class | SVM[3] | Wishart[5] | CNN[10] | FCN[18] | MS-FCN[28] | UNet[19] | MSR-Unet |
    |-------|--------|------------|---------|---------|------------|----------|----------|
    | Built-up areas | 0.7316 | 0.6732 | 0.8447 | 0.9124 | 0.9339 | 0.9180 | 0.9487 |
    | Forest | 0.8477 | 0.8109 | 0.8762 | 0.9502 | 0.9445 | 0.9557 | 0.9765 |
    | Open areas | 0.8845 | 0.9062 | 0.9515 | 0.9711 | 0.9691 | 0.9732 | 0.9820 |
    | AA | 0.8213 | 0.7968 | 0.8907 | 0.9442 | 0.9492 | 0.9489 | 0.9691 |
    | OA | 0.8463 | 0.8346 | 0.9113 | 0.9520 | 0.9554 | 0.9559 | 0.9726 |
    | Kappa | 0.7324 | 0.7118 | 0.8479 | 0.9197 | 0.9252 | 0.9262 | 0.9542 |

    Table 3  Classification accuracy and AA, OA, Kappa values of different methods on the Xi'an dataset

    | Class | SVM[3] | Wishart[5] | CNN[10] | FCN[18] | MS-FCN[28] | UNet[19] | MSR-Unet |
    |-------|--------|------------|---------|---------|------------|----------|----------|
    | Grassland | 0.7870 | 0.8209 | 0.9208 | 0.9740 | 0.9677 | 0.9797 | 0.9887 |
    | Urban | 0.8202 | 0.8745 | 0.9528 | 0.9725 | 0.9701 | 0.9762 | 0.9903 |
    | Crops | 0.2711 | 0.1544 | 0.9004 | 0.9852 | 0.9531 | 0.9838 | 0.9880 |
    | River | 0.9053 | 0.7343 | 0.9319 | 0.9528 | 0.9571 | 0.9612 | 0.9707 |
    | AA | 0.6959 | 0.6460 | 0.9265 | 0.9711 | 0.9620 | 0.9752 | 0.9844 |
    | OA | 0.7958 | 0.7153 | 0.9324 | 0.9706 | 0.9663 | 0.9757 | 0.9864 |
    | Kappa | 0.6803 | 0.5942 | 0.8940 | 0.9542 | 0.9475 | 0.9622 | 0.9788 |

    Table 4  Classification accuracy and AA, OA, Kappa values of different methods on the San Francisco dataset

    | Class | SVM[3] | Wishart[5] | CNN[10] | FCN[18] | MS-FCN[28] | UNet[19] | MSR-Unet |
    |-------|--------|------------|---------|---------|------------|----------|----------|
    | Developed urban | 0.8125 | 0.8167 | 0.8096 | 0.9044 | 0.9277 | 0.8989 | 0.9738 |
    | Water | 0.9758 | 0.9954 | 0.9898 | 0.9952 | 0.9982 | 0.9978 | 0.9991 |
    | High-density urban | 0.7047 | 0.7828 | 0.7936 | 0.9449 | 0.8943 | 0.9421 | 0.9762 |
    | Low-density urban | 0.6249 | 0.6623 | 0.7905 | 0.9222 | 0.8840 | 0.9309 | 0.9742 |
    | Vegetation | 0.7642 | 0.5979 | 0.9051 | 0.9305 | 0.9334 | 0.9406 | 0.9647 |
    | AA | 0.7764 | 0.7710 | 0.8577 | 0.9394 | 0.9287 | 0.9415 | 0.9776 |
    | OA | 0.8293 | 0.8239 | 0.9026 | 0.9662 | 0.9582 | 0.9692 | 0.9879 |
    | Kappa | 0.7511 | 0.7445 | 0.8569 | 0.9448 | 0.9316 | 0.9499 | 0.9803 |

    Table 5  Parameter counts and FLOPs of different deep-learning methods

    | Dataset | Metric | CNN | FCN | MS-FCN | Unet | MSR-Unet |
    |---------|--------|-----|-----|--------|------|----------|
    | Xi'an | Parameters (M) | 23.38 | 16.52 | 34.97 | 5.88 | 4.05 |
    | Xi'an | FLOPs (G) | 0.66 | 2.99 | 17.91 | 3.87 | 1.94 |
    | German ESAR / San Francisco | Parameters (M) | 23.38 | 16.52 | 34.97 | 5.88 | 4.05 |
    | German ESAR / San Francisco | FLOPs (G) | 0.66 | 12.00 | 71.65 | 15.50 | 7.77 |
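The parameter savings in Table 5 follow directly from the factorization: a standard k×k convolution needs k·k·C_in·C_out weights, while its depthwise separable counterpart needs only k·k·C_in + C_in·C_out, a ratio of roughly 1/C_out + 1/k². A small arithmetic sketch (the layer sizes below are illustrative, not taken from the paper's network):

```python
def conv_params(k, c_in, c_out):
    """Weight count of a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def dsc_params(k, c_in, c_out):
    """Depthwise (k*k*c_in) plus pointwise (c_in*c_out) weight count."""
    return k * k * c_in + c_in * c_out

# Example layer: 3x3 kernels, 64 -> 128 channels
std = conv_params(3, 64, 128)
dsc = dsc_params(3, 64, 128)
print(std, dsc, round(dsc / std, 4))  # 73728 8768 0.1189
```

For 3×3 kernels the separable layer keeps about 1/9 + 1/C_out of the standard weight count, consistent with MSR-Unet having the smallest parameter budget in Table 5 despite its multi-scale branches.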
  • [1] HUANG Zhongling, DATCU M, PAN Zongxu, et al. Deep SAR-Net: Learning objects from signals[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2020, 161: 179–193. doi: 10.1016/j.isprsjprs.2020.01.016
    [2] JAFARI M, MAGHSOUDI Y, and VALADAN ZOEJ M J. A new method for land cover characterization and classification of polarimetric SAR data using polarimetric signatures[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2015, 8(7): 3595–3607. doi: 10.1109/JSTARS.2014.2387374
    [3] FUKUDA S and HIROSAWA H. Support vector machine classification of land cover: Application to polarimetric SAR data[C]. Proceedings of IEEE 2001 International Geoscience and Remote Sensing Symposium, Sydney, Australia, 2001: 187–189.
    [4] OKWUASHI O, NDEHEDEHE C E, OLAYINKA D N, et al. Deep support vector machine for PolSAR image classification[J]. International Journal of Remote Sensing, 2021, 42(17): 6498–6536. doi: 10.1080/01431161.2021.1939910
    [5] LEE J S, GRUNES M R, AINSWORTH T L, et al. Quantitative comparison of classification capability: Fully-polarimetric versus partially polarimetric SAR[C]. Proceedings of IEEE 2000 International Geoscience and Remote Sensing Symposium. Taking the Pulse of the Planet: The Role of Remote Sensing in Managing the Environment, Honolulu, USA, 2000: 1101–1103.
    [6] WEI Zhiqiang and BI Haixia. PolSAR image classification based on discriminative clustering[J]. Journal of Electronics & Information Technology, 2018, 40(12): 2795–2803. doi: 10.11999/JEIT180229
    [7] CHEN Yanqiao, JIAO Licheng, LI Yangyang, et al. Multilayer projective dictionary pair learning and sparse autoencoder for PolSAR image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2017, 55(12): 6683–6694. doi: 10.1109/TGRS.2017.2727067
    [8] XIE Wen, MA Gaini, HUA Wenqiang, et al. Complex-valued Wishart stacked auto-encoder network for PolSAR image classification[C]. Proceedings of 2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 2019: 3193–3196.
    [9] AI Jiaqiu, WANG Feifan, MAO Yuxiang, et al. A fine PolSAR terrain classification algorithm using the texture feature fusion-based improved convolutional autoencoder[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5218714. doi: 10.1109/TGRS.2021.3131986
    [10] CHEN Siwei and TAO Chensong. PolSAR image classification using polarimetric-feature-driven deep convolutional neural network[J]. IEEE Geoscience and Remote Sensing Letters, 2018, 15(4): 627–631. doi: 10.1109/LGRS.2018.2799877
    [11] HUA Wenqiang, ZHANG Cong, XIE Wen, et al. Polarimetric SAR image classification based on ensemble dual-branch CNN and superpixel algorithm[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2022, 15: 2759–2772. doi: 10.1109/JSTARS.2022.3162953
    [12] CUI Yuanhao, LIU Fang, JIAO Licheng, et al. Polarimetric multipath convolutional neural network for PolSAR image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5207118. doi: 10.1109/TGRS.2021.3071559
    [13] QIN Xianxiang, YU Wangsheng, WANG Peng, et al. Weakly supervised classification of PolSAR images based on sample refinement with complex-valued convolutional neural network[J]. Journal of Radars, 2020, 9(3): 525–538. doi: 10.12000/JR20062
    [14] LIU Fang, JIAO Licheng, and TANG Xu. Task-oriented GAN for PolSAR image classification and clustering[J]. IEEE Transactions on Neural Networks and Learning Systems, 2019, 30(9): 2707–2719. doi: 10.1109/TNNLS.2018.2885799
    [15] LI Xiufang, SUN Qigong, LI Lingling, et al. SSCV-GANs: Semi-supervised complex-valued GANs for PolSAR image classification[J]. IEEE Access, 2020, 8: 146560–146576. doi: 10.1109/ACCESS.2020.3004591
    [16] YANG Chen, HOU Biao, CHANUSSOT J, et al. N-Cluster loss and hard sample generative deep metric learning for PolSAR image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5210516. doi: 10.1109/TGRS.2021.3099840
    [17] HE Fengshou, HE You, LIU Zhunga, et al. Research and development on applications of convolutional neural networks of radar automatic target recognition[J]. Journal of Electronics & Information Technology, 2020, 42(1): 119–131. doi: 10.11999/JEIT180899
    [18] LI Yangyang, CHEN Yanqiao, LIU Guangyuan, et al. A novel deep fully convolutional network for PolSAR image classification[J]. Remote Sensing, 2018, 10(12): 1984. doi: 10.3390/rs10121984
    [19] RONNEBERGER O, FISCHER P, and BROX T. U-Net: Convolutional networks for biomedical image segmentation[C]. Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 2015: 234–241.
    [20] KOTRU R, SHAIKH M, TURKAR V, et al. Semantic segmentation of PolSAR images for various land cover features[C]. Proceedings of 2021 IEEE International Geoscience and Remote Sensing Symposium, Brussels, Belgium, 2021: 351–354.
    [21] HOWARD A G, ZHU Menglong, CHEN Bo, et al. MobileNets: Efficient convolutional neural networks for mobile vision applications[J]. arXiv: 1704.04861, 2017.
    [22] CHOLLET F. Xception: Deep learning with depthwise separable convolutions[C]. Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 1800–1807.
    [23] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]. Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, USA, 2016: 770–778.
    [24] HUANG Gao, LIU Zhuang, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]. Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, USA, 2017: 2261–2269.
    [25] AI Jiaqiu, MAO Yuxiang, LUO Qiwu, et al. SAR target classification using the multikernel-size feature fusion-based convolutional neural network[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5214313. doi: 10.1109/TGRS.2021.3106915
    [26] SHANG Ronghua, HE Jianghai, WANG Jiaming, et al. Dense connection and depthwise separable convolution based CNN for polarimetric SAR image classification[J]. Knowledge-Based Systems, 2020, 194: 105542. doi: 10.1016/j.knosys.2020.105542
    [27] SUN Junmei, GE Qingqing, LI Xiumei, et al. A medical image segmentation network with boundary enhancement[J]. Journal of Electronics & Information Technology, 2022, 44(5): 1643–1652. doi: 10.11999/JEIT210784
    [28] WU Wenjin, LI Hailei, LI Xinwu, et al. PolSAR image semantic segmentation based on deep transfer learning—realizing smooth classification with small training sets[J]. IEEE Geoscience and Remote Sensing Letters, 2019, 16(6): 977–981. doi: 10.1109/LGRS.2018.2886559
Figures (9) / Tables (5)
Metrics
  • Article views: 650
  • Full-text HTML views: 468
  • PDF downloads: 122
  • Cited by: 0
Publication history
  • Received: 2022-06-29
  • Revised: 2023-03-30
  • Published online: 2023-04-04
  • Issue published: 2023-08-21
