LIAO Diling, LAI Tao, HUANG Haifeng, WANG Qingsong. LightMamba: A Lightweight Mamba Network for the Joint Classification of HSI and LiDAR Data[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT250981

LightMamba: A Lightweight Mamba Network for the Joint Classification of HSI and LiDAR Data

doi: 10.11999/JEIT250981 cstr: 32379.14.JEIT250981
Funds: The National Natural Science Foundation of China (62273365) and the Xiaomi Young Talents Program
  • Received Date: 2025-09-25
  • Accepted Date: 2025-12-30
  • Rev Recd Date: 2025-12-30
  • Available Online: 2026-01-09
Objective  The joint classification of HyperSpectral Imagery (HSI) and Light Detection And Ranging (LiDAR) data is a critical task in remote sensing, where complementary spectral and spatial information is exploited to improve land cover recognition accuracy. However, mainstream deep learning approaches, particularly those based on Convolutional Neural Networks (CNNs) and Transformers, are constrained by high computational cost and limited efficiency in modeling long-range dependencies. CNN-based methods are effective for local feature extraction but suffer from limited receptive fields and increased parameter counts when scaled. Transformer architectures provide global context modeling but incur quadratic computational complexity due to self-attention, which leads to prohibitive costs when processing high-dimensional remote sensing data. To address these limitations, a lightweight network architecture named LightMamba is proposed. The model leverages an advanced State Space Model (SSM) to achieve efficient and accurate joint classification of HSI and LiDAR data. The objective is to maintain linear computational complexity while effectively fusing multi-source features and capturing global contextual relationships, thereby supporting resource-constrained applications without accuracy degradation.

Methods  The proposed LightMamba framework consists of three core components. First, a MultiSource Alignment Module (MSAM) is designed to address the heterogeneity between HSI and LiDAR data. A dual-branch network with shared weights projects both modalities into a unified feature space, which ensures a consistent spatial-spectral representation. This shared-weight strategy reduces the parameter count and strengthens inter-modal correlation through the learning of common foundational features. Second, the Multi-Source Lightweight Mamba Module (MSLMM) forms the core of the framework.
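The shared-weight alignment idea behind MSAM can be illustrated with a minimal NumPy sketch. The dimensions, the tanh activation, and the per-modality input embeddings below are hypothetical choices for illustration, not the paper's exact layer configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 144 HSI bands, 1 LiDAR channel, 64-d unified space.
hsi_bands, lidar_channels, d_model = 144, 1, 64

# Small modality-specific embeddings bring each source to d_model...
W_in_hsi = rng.standard_normal((hsi_bands, d_model)) * 0.02
W_in_lidar = rng.standard_normal((lidar_channels, d_model)) * 0.02
# ...then one projection shared by both branches enforces a common space
# (this weight sharing is what cuts the parameter count).
W_shared = rng.standard_normal((d_model, d_model)) * 0.02

def align(x, w_in):
    h = x @ w_in                 # modality-specific embedding
    return np.tanh(h @ W_shared) # shared projection into the unified space

# A flattened 17x17 patch from each modality (pixel rows, channel columns).
hsi_patch = rng.standard_normal((17 * 17, hsi_bands))
lidar_patch = rng.standard_normal((17 * 17, lidar_channels))

f_hsi = align(hsi_patch, W_in_hsi)
f_lidar = align(lidar_patch, W_in_lidar)
```

After alignment both modalities have the same shape and live in the same feature space, so they can be fused token by token downstream.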
Aligned HSI and LiDAR feature sequences are processed by a parameter-efficient Mamba architecture. A hybrid parameter-sharing strategy combines shared matrices with modality-specific parameters, which preserves discriminative capability while reducing redundancy. LiDAR elevation information serves as a positional guide to enhance spatial awareness during feature fusion. The selective scanning mechanism of the SSM enables efficient modeling of long-range dependencies with linear complexity, thereby avoiding the quadratic cost associated with Transformers. Spectral bands are processed sequentially to preserve joint spectral-spatial characteristics. Finally, a MultiLayer Perceptron (MLP)-based classifier maps the fused high-level features to class probabilities with low computational overhead. The model is trained end to end with a cross-entropy loss. Evaluations are conducted on two public benchmarks, the Houston and Augsburg datasets. Comparisons are performed against representative methods, including CoupledCNN, GAMF, HCT, MFT, Cross-HL, and S2CrossMamba, using Overall Accuracy (OA), Average Accuracy (AA), and the Kappa coefficient. Ablation experiments analyze the contribution of each module, and parameter counts and FLoating-Point Operations (FLOPs) are reported.

Results and Discussions  Experimental results demonstrate that LightMamba achieves superior performance and efficiency. On the Houston dataset, an OA of 94.30%, an AA of 95.25%, and a Kappa coefficient of 93.83% are obtained, exceeding those of all comparison methods. Perfect classification accuracy is achieved for several classes, including Soil and Water. Classification maps exhibit improved spatial continuity and internal consistency, with reduced speckle noise, particularly in heterogeneous regions such as commercial areas.
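The linear-complexity behavior of the selective scan can be conveyed by a toy single-channel recurrence: one left-to-right pass over the sequence, so the cost grows as O(L) rather than the O(L²) of self-attention. The scalar gates a, b, c below stand in for Mamba's input-dependent discretized A, B, C matrices and are a deliberate simplification, not the paper's actual parameterization:

```python
import numpy as np

def selective_scan(u, a, b, c):
    """Toy scan: h_t = a_t * h_{t-1} + b_t * u_t,  y_t = c_t * h_t.

    A single pass over the L inputs -> O(L) time and O(1) state,
    in contrast to attention's all-pairs O(L^2) interactions.
    """
    h = 0.0
    y = np.empty_like(u, dtype=float)
    for t in range(len(u)):
        h = a[t] * h + b[t] * u[t]  # state update (gates vary with input in Mamba)
        y[t] = c[t] * h             # readout
    return y

L = 8
u = np.ones(L)          # constant input signal
a = np.full(L, 0.5)     # decay gate (fixed here; input-dependent in Mamba)
b = np.ones(L)
c = np.ones(L)
y = selective_scan(u, a, b, c)
# With constant gates the state converges geometrically toward b/(1-a) = 2.
```

Because each step only reads the previous state, long sequences (e.g. many spectral bands) are handled without the memory blow-up of attention maps.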
On the Augsburg dataset, LightMamba achieves the highest OA of 87.41% and a Kappa coefficient of 82.30%, confirming strong generalization across different scenes. Although its AA is slightly lower than that of S2CrossMamba, the higher OA and Kappa values indicate better overall performance. Complexity analysis shows that LightMamba attains high accuracy with a lightweight structure containing only 69.93 k parameters, substantially fewer than GAMF and comparable to S2CrossMamba, while maintaining moderate FLOPs. Experiments on the input patch size indicate adaptability to scene characteristics, with optimal performance observed at 17×17 for the Houston dataset and 9×9 for the Augsburg dataset.

Conclusions  A lightweight network architecture, LightMamba, is presented for the joint classification of HSI and LiDAR data. By combining a shared-weight MSAM with a lightweight Mamba module that adopts hybrid parameterization and elevation-guided fusion, modal heterogeneity is effectively addressed and long-range contextual dependencies are captured with linear computational complexity. Experimental results on public benchmarks demonstrate state-of-the-art classification accuracy with a reduced parameter count and computational cost compared with existing methods. These findings confirm the potential of Mamba-based architectures for efficient multi-source remote sensing data fusion. Future research will explore optimized two-dimensional scanning mechanisms and adaptive scanning strategies to further improve feature capture efficiency and classification performance. The LightMamba code is available at https://www.scidb.cn/detail?dataSetId=064dc4ac5350418e87a8b82dd324737b&version=V1&code=j00173.
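The OA, AA, and Kappa figures used throughout follow their standard confusion-matrix definitions; the sketch below uses a made-up 2-class matrix purely for illustration:

```python
import numpy as np

def classification_metrics(cm):
    """OA, AA, and Cohen's Kappa from a confusion matrix (rows = true class)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n                          # overall accuracy
    aa = np.mean(np.diag(cm) / cm.sum(axis=1))     # mean per-class recall
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, kappa

# Hypothetical 2-class confusion matrix (not from the paper's experiments).
cm = np.array([[50, 2],
               [3, 45]])
oa, aa, kappa = classification_metrics(cm)  # oa = 95/100 = 0.95
```

Kappa discounts agreement expected by chance, which is why a method can rank differently on Kappa than on raw OA when class sizes are imbalanced.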
[1] FAUVEL M, TARABALKA Y, BENEDIKTSSON J A, et al. Advances in spectral-spatial classification of hyperspectral images[J]. Proceedings of the IEEE, 2013, 101(3): 652–675. doi: 10.1109/jproc.2012.2197589.
[2] ZHANG Xia, SUN Yanli, SHANG Kun, et al. Crop classification based on feature band set construction and object-oriented approach using hyperspectral images[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2016, 9(9): 4117–4128. doi: 10.1109/jstars.2016.2577339.
[3] GHAMISI P, PLAZA J, CHEN Yushi, et al. Advanced spectral classifiers for hyperspectral images: A review[J]. IEEE Geoscience and Remote Sensing Magazine, 2017, 5(1): 8–32. doi: 10.1109/mgrs.2016.2616418.
[4] PRICOPE N G, HALLS J N, DALTON E G, et al. Precision mapping of coastal wetlands: An integrated remote sensing approach using unoccupied aerial systems light detection and ranging and multispectral data[J]. Journal of Remote Sensing, 2024, 4: 0169. doi: 10.34133/remotesensing.0169.
[5] SHI Cuiping, LIAO Diling, ZHANG Tianyu, et al. Hyperspectral image classification based on expansion convolution network[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5528316. doi: 10.1109/tgrs.2022.3174015.
[6] LIU Wenping, ZHANG Yuxiang, and DONG Yanni. Multifeature collaborative attention dynamic hypergraph convolutional network for hyperspectral image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2025, 63: 5522115. doi: 10.1109/tgrs.2025.3598375.
[7] LIU Quanyong, PENG Jiangtao, ZHANG Genwei, et al. Deep contrastive learning network for small-sample hyperspectral image classification[J]. Journal of Remote Sensing, 2023, 3: 0025. doi: 10.34133/remotesensing.0025.
[8] ZHAO Xudong, TAO Ran, LI Wei, et al. Joint classification of hyperspectral and LiDAR data using hierarchical random walk and deep CNN architecture[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 58(10): 7355–7370. doi: 10.1109/tgrs.2020.2982064.
[9] LI Zhi, ZHENG Ke, GAO Lianru, et al. Feature reconstruction guided fusion network for hyperspectral and LiDAR classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2025, 63: 4408914. doi: 10.1109/tgrs.2025.3562246.
[10] GAO Hongmin, YANG Yao, LI Chenming, et al. Multiscale residual network with mixed depthwise convolution for hyperspectral image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2021, 59(4): 3396–3408. doi: 10.1109/tgrs.2020.3008286.
[11] HONG Danfeng, HAN Zhu, YAO Jing, et al. SpectralFormer: Rethinking hyperspectral image classification with transformers[J]. IEEE Transactions on Geoscience and Remote Sensing, 2022, 60: 5518615. doi: 10.1109/tgrs.2021.3130716.
[12] WANG Minhui, SUN Yaxiu, XIANG Jianhong, et al. CITNet: Convolution interaction transformer network for hyperspectral and LiDAR image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2024, 62: 5535918. doi: 10.1109/tgrs.2024.3477965.
[13] PAN Yukai, WU Nan, and JIN Wei. Multimodal feature disentangle-fusion network for hyperspectral and LiDAR data classification[J]. IEEE Geoscience and Remote Sensing Letters, 2024, 21: 5510905. doi: 10.1109/lgrs.2024.3492252.
[14] ZHU Fei, SHI Cuiping, SHI Kaijie, et al. Joint classification of hyperspectral and LiDAR data using hierarchical multimodal feature aggregation-based multihead axial attention transformer[J]. IEEE Transactions on Geoscience and Remote Sensing, 2025, 63: 5503817. doi: 10.1109/tgrs.2025.3533475.
[15] GU A, GOEL K, and RÉ C. Efficiently modeling long sequences with structured state spaces[C]. Proceedings of the 10th International Conference on Learning Representations, 2022.
[16] HE Yan, TU Bingtu, LIU Bo, et al. HSI-MFormer: Integrating mamba and transformer experts for hyperspectral image classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2025, 63: 5621916. doi: 10.1109/tgrs.2025.3564167.
[17] ZHUANG Peixian, ZHANG Xiaochen, WANG Hao, et al. FAHM: Frequency-aware hierarchical mamba for hyperspectral image classification[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2025, 18: 6299–6313. doi: 10.1109/jstars.2025.3539791.
[18] LIAO Diling, WANG Qingsong, LAI Tao, et al. Joint classification of hyperspectral and LiDAR data based on mamba[J]. IEEE Transactions on Geoscience and Remote Sensing, 2024, 62: 5530915. doi: 10.1109/tgrs.2024.3459709.
[19] HE Yan, TU Bing, JIANG Puzhao, et al. Classification of multisource remote sensing data using slice mamba[J]. IEEE Transactions on Geoscience and Remote Sensing, 2025, 63: 5505414. doi: 10.1109/tgrs.2025.3538553.
[20] ZHANG Guanglian, ZHANG Zhanxu, DENG Jiangwei, et al. S2CrossMamba: Spatial–spectral cross-mamba for multimodal remote sensing image classification[J]. IEEE Geoscience and Remote Sensing Letters, 2024, 21: 5510705. doi: 10.1109/lgrs.2024.3488036.
[21] HANG Renlong, LI Zhu, GHAMISI P, et al. Classification of hyperspectral and LiDAR data using coupled CNNs[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 58(7): 4939–4950. doi: 10.1109/tgrs.2020.2969024.
[22] CAI Jianghui, ZHANG Min, YANG Haifeng, et al. A novel graph-attention based multimodal fusion network for joint classification of hyperspectral image and LiDAR data[J]. Expert Systems with Applications, 2024, 249: 123587. doi: 10.1016/j.eswa.2024.123587.
[23] ZHAO Guangrui, YE Qiaolin, SUN Len, et al. Joint classification of hyperspectral and LiDAR data using a hierarchical CNN and transformer[J]. IEEE Transactions on Geoscience and Remote Sensing, 2023, 61: 5500716. doi: 10.1109/tgrs.2022.3232498.
[24] HOFFMANN D S, CLASEN K N, and DEMIR B. Transformer-based multi-modal learning for multi-label remote sensing image classification[C]. IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, USA, 2023: 4891–4894. doi: 10.1109/igarss52108.2023.10281927.
[25] ROY S K, SUKUL A, JAMALI A, et al. Cross hyperspectral and LiDAR attention transformer: An extended self-attention for land use and land cover classification[J]. IEEE Transactions on Geoscience and Remote Sensing, 2024, 62: 5512815. doi: 10.1109/tgrs.2024.3374324.
    Figures(7)  / Tables(4)
