Volume 44, Issue 11, Nov. 2022
Citation: DING Bo, FAN Yufei, GAO Yuan, HE Yongjun. 3D Model Classification Based on Viewpoint Differences and Multiple Classifiers[J]. Journal of Electronics & Information Technology, 2022, 44(11): 3977-3986. doi: 10.11999/JEIT210823

3D Model Classification Based on Viewpoint Differences and Multiple Classifiers

doi: 10.11999/JEIT210823
Funds: The National Natural Science Foundation of China (61673142) and the Natural Science Foundation of Heilongjiang Province of China (JJ2019JQ0013)
  • Received Date: 2021-08-12
  • Accepted Date: 2022-03-31
  • Rev Recd Date: 2022-03-15
  • Available Online: 2022-04-10
  • Publish Date: 2022-11-14
  • Abstract: Combining view-based 3D model classification with deep learning can effectively improve classification accuracy. However, current methods treat the views rendered from every viewpoint of a 3D model as members of the same category and ignore the differences between views, which makes it difficult for the classifier to learn a reasonable decision surface. To address this problem, a 3D model classification method based on a deep neural network is proposed. Multiple viewpoint groups are placed evenly around the 3D model, and a view classifier is trained for each viewpoint group to fully mine the information the model presents from different viewpoint groups. These classifiers share a feature extraction network but each has its own classification network. To extract discriminative view features, an attention mechanism is added to the feature extraction network; to model views coming from outside a classifier's own viewpoint group, additional classes are added to the classification networks (see the illustrative sketch below). In the classification stage, a view selection strategy is first proposed that classifies the 3D model from a small number of views, improving classification efficiency; a classification strategy is then proposed to achieve reliable 3D model classification from the selected views. Experimental results on ModelNet10 and ModelNet40 show that the method reaches classification accuracies of up to 93.6% and 91.0%, respectively, with only 3 views.
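To make the architecture described in the abstract concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: the backbone (ResNet-18), the CBAM-style attention block, the 512-dimensional feature size, the assumption of one extra class per head, and the routing logic are all illustrative assumptions; the abstract only states that the per-viewpoint-group classifiers share an attention-equipped feature extraction network and that extra classes absorb views from outside a classifier's viewpoint group.

```python
# Hedged sketch of a shared-backbone, multi-head view classifier.
# Module names, sizes, and routing below are assumptions for illustration only.
import torch
import torch.nn as nn
import torchvision.models as models


class ChannelSpatialAttention(nn.Module):
    """Simplified CBAM-style channel + spatial attention (assumed variant)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors.
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))


class MultiGroupViewClassifier(nn.Module):
    """Shared feature extractor + one classification head per viewpoint group.

    Each head outputs num_classes + 1 logits; the extra class is meant to
    absorb views rendered from outside that head's viewpoint group.
    """
    def __init__(self, num_classes: int, num_groups: int):
        super().__init__()
        backbone = models.resnet18(weights=None)          # assumed backbone
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.attention = ChannelSpatialAttention(512)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.heads = nn.ModuleList(
            [nn.Linear(512, num_classes + 1) for _ in range(num_groups)])

    def forward(self, views: torch.Tensor, group_ids: torch.Tensor):
        # views: (B, 3, H, W) rendered views; group_ids: (B,) viewpoint-group indices.
        f = self.pool(self.attention(self.features(views))).flatten(1)
        logits = torch.stack([head(f) for head in self.heads], dim=1)  # (B, G, C+1)
        return logits[torch.arange(views.size(0)), group_ids]          # (B, C+1)
```

At inference, one plausible reading of the view selection and classification strategies is to render a view for each of a few viewpoint groups, score each with its matching head, discard predictions whose top score falls on the out-of-group class, and fuse the remaining predictions; the paper's actual selection and fusion rules are more specific than this sketch.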
  • [1]
    韩丽, 刘书宁, 徐圣斯, 等. 自适应稀疏编码融合的非刚性三维模型分类算法[J]. 计算机辅助设计与图形学学报, 2019, 31(11): 1898–1907. doi: 10.3724/SP.J.1089.2019.17759

    HAN Li, LIU Shuning, XU Shengsi, et al. Non-rigid 3D model classification algorithm based on adaptive sparse coding fusion[J]. Journal of Computer-Aided Design &Computer Graphics, 2019, 31(11): 1898–1907. doi: 10.3724/SP.J.1089.2019.17759
    [2]
    周文, 贾金原. 一种SVM学习框架下的Web3D轻量级模型检索算法[J]. 电子学报, 2019, 47(1): 92–99. doi: 10.3969/j.issn.0372-2112.2019.01.012

    ZHOU Wen and JIA Jinyuan. Web3D lightweight for sketch-based shape retrieval using SVM learning algorithm[J]. Acta Electronica Sinica, 2019, 47(1): 92–99. doi: 10.3969/j.issn.0372-2112.2019.01.012
    [3]
    王栋. 面向三维模型检索的多视图特征学习方法研究[D]. [博士论文], 哈尔滨工业大学, 2019: 1–15.

    WANG Dong. Research on multi-view feature learning for 3D model retrieval[D]. [Ph. D. dissertation], Harbin Institute of Technology, 2019: 1–15.
    [4]
    SU Hang, MAJI S, KALOGERAKIS E, et al. Multi-view convolutional neural networks for 3D shape recognition[C]. 2015 IEEE International Conference on Computer Vision, Santiago, Chile, 2015: 945–953.
    [5]
    LIU Anan, GUO Fubin, ZHOU Heyu, et al. Semantic and context information fusion network for view-based 3D model classification and retrieval[J]. IEEE Access, 2020, 8: 155939–155950. doi: 10.1109/ACCESS.2020.3018875
    [6]
    GAO Zan, XUE Haixin, and WAN Shaohua. Multiple discrimination and pairwise CNN for view-based 3D object retrieval[J]. Neural Networks, 2020, 125: 290–302. doi: 10.1016/j.neunet.2020.02.017
    [7]
    HEGDE V and ZADEH R. FusionNet: 3D object classification using multiple data representations[EB/OL]. https://arxiv.org/abs/1607.05695, 2016.
    [8]
    LIU Anan, ZHOU Heyu, LI Mengjie, et al. 3D model retrieval based on multi-view attentional convolutional neural network[J]. Multimedia Tools and Applications, 2020, 79(7-8): 4699–4711. doi: 10.1007/s11042-019-7521-8
    [9]
    LIANG Qi, WANG Yixin, NIE Weizhi, et al. MVCLN: Multi-view convolutional LSTM network for cross-media 3D shape recognition[J]. IEEE Access, 2020, 8: 139792–139802. doi: 10.1109/ACCESS.2020.3012692
    [10]
    MA Yanxun, ZHENG Bin, GUO Yulan, et al. Boosting multi-view convolutional neural networks for 3D object recognition via view saliency[C]. The 12th Chinese Conference on Image and Graphics Technologies, Beijing, China, 2017: 199–209.
    [11]
    白静, 司庆龙, 秦飞巍. 基于卷积神经网络和投票机制的三维模型分类与检索[J]. 计算机辅助设计与图形学学报, 2019, 31(2): 303–314. doi: 10.3724/SP.J.1089.2019.17160

    BAI Jing, SI Qinglong, and QIN Feiwei. 3D model classification and retrieval based on CNN and voting scheme[J]. Journal of Computer-Aided Design &Computer Graphics, 2019, 31(2): 303–314. doi: 10.3724/SP.J.1089.2019.17160
    [12]
    KANEZAKI A, MATSUSHITA Y, and NISHIDA Y. RotationNet: Joint object categorization and pose estimation using multiviews from unsupervised viewpoints[C]. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, USA, 2018: 5010–5019.
    [13]
    SHI Baoguang, BAI Song, ZHOU Zhichao, et al. DeepPano: Deep panoramic representation for 3-D shape recognition[J]. IEEE Signal Processing Letters, 2015, 22(12): 2339–2343. doi: 10.1109/LSP.2015.2480802
    [14]
    SINHA A, BAI Jing, and RAMANI K. Deep learning 3D shape surfaces using geometry images[C]. The 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 2016: 223–240.
    [15]
    SFIKAS K, THEOHARIS T, and PRATIKAKIS I. Exploiting the PANORAMA representation for convolutional neural network classification and retrieval[C]. The 10th Eurographics Workshop on 3D Object Retrieval, Lyon, France, 2017: 1–7.
    [16]
    HAN Zhizhong, SHANG Mingyang, LIU Zhenbao, et al. SeqViews2SeqLabels: Learning 3D global features via aggregating sequential views by RNN with attention[J]. IEEE Transactions on Image Processing, 2019, 28(2): 658–672. doi: 10.1109/TIP.2018.2868426
    [17]
    WOO S, PARK J, LEE J Y, et al. Cbam: Convolutional block attention module[C]. The 15th European Conference on Computer Vision, Munich, Germany, 2018: 3–19.
    [18]
    WOO S M, LEE S H, YOO J S, et al. Improving color constancy in an ambient light environment using the phong reflection model[J]. IEEE Transactions on Image Processing, 2018, 27(4): 1862–1877. doi: 10.1109/TIP.2017.2785290
    [19]
    SHILANE P, MIN P, KAZHDAN M, et al. The Princeton shape benchmark[C]. Shape Modeling Applications, 2004, Genova, Italy, 2004: 167–178.