Affective Abstract Image Classification Based on Convolutional Sparse Autoencoders across Different Domains

FAN Yangyu, LI Zuhe, WANG Fengqin, MA Jiangtao

Citation: FAN Yangyu, LI Zuhe, WANG Fengqin, MA Jiangtao. Affective Abstract Image Classification Based on Convolutional Sparse Autoencoders across Different Domains[J]. Journal of Electronics & Information Technology, 2017, 39(1): 167-175. doi: 10.11999/JEIT160241

doi: 10.11999/JEIT160241
Funds: The Science and Technology Innovation Engineering Program for Shaanxi Key Laboratories (2013SZS15-K02)

  • Abstract: To apply unsupervised feature learning to affective semantic analysis of images with small sample sizes, this paper adopts a self-taught domain adaptation approach based on convolutional sparse autoencoders to classify a limited number of labeled abstract images by emotion. It also proposes a method that ranks the weights learned by the autoencoder according to an average-gradient criterion, allowing an intuitive comparison of feature learning results obtained in different domains. First, image patches are randomly sampled from a large set of unlabeled source-domain images and local features are learned with a sparse autoencoder. The weight matrices corresponding to different features are then sorted by the minimum of each matrix's average gradients over the three color channels. Finally, a convolutional neural network with a pooling layer extracts global feature responses from the labeled target-domain image samples, which are fed into a logistic regression model for affective classification. Experimental results show that self-taught domain adaptation can supply training data for applying unsupervised feature learning to a target domain with limited samples, and that cross-domain feature learning with sparse autoencoders achieves better recognition performance than low-level visual features in affective semantic analysis of a limited number of abstract images.
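The pipeline described in the abstract can be illustrated in code. The sketch below, assuming NumPy, SciPy, and scikit-learn, shows the average-gradient ranking of learned filters and the convolution-and-pooling feature extraction that feeds logistic regression; the function names, the (n_filters, h, w, 3) filter layout, the sigmoid activation, and the 2×2 mean pooling are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the cross-domain pipeline (illustrative assumptions,
# not the authors' code). `filters` are assumed to have been learned by a
# sparse autoencoder on patches sampled from unlabeled source-domain images.
import numpy as np
from scipy.signal import convolve2d
from sklearn.linear_model import LogisticRegression

def average_gradient(channel):
    """Average gradient of a 2-D channel: mean of sqrt((gx^2 + gy^2) / 2)."""
    gy, gx = np.gradient(channel.astype(np.float64))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def sort_filters(filters):
    """Rank filters of shape (n_filters, h, w, 3) by the smallest average
    gradient among their three color channels (ordering is an assumption)."""
    scores = np.array([min(average_gradient(f[:, :, c]) for c in range(3))
                       for f in filters])
    order = np.argsort(scores)[::-1]
    return filters[order], scores[order]

def extract_features(image, filters, pool_grid=2):
    """Convolve an (H, W, 3) image with each filter, apply a sigmoid, and
    mean-pool each response map over a pool_grid x pool_grid grid."""
    feats = []
    for f in filters:
        resp = sum(convolve2d(image[:, :, c], f[:, :, c], mode="valid")
                   for c in range(3))
        resp = 1.0 / (1.0 + np.exp(-resp))  # sigmoid activation
        h, w = resp.shape
        for i in range(pool_grid):
            for j in range(pool_grid):
                block = resp[i * h // pool_grid:(i + 1) * h // pool_grid,
                             j * w // pool_grid:(j + 1) * w // pool_grid]
                feats.append(block.mean())
    return np.array(feats)

# Usage sketch on a small labeled target-domain set (images, labels):
# X = np.stack([extract_features(img, filters) for img in images])
# clf = LogisticRegression(max_iter=1000).fit(X, labels)
```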
Publication History
  • Received: 2016-03-17
  • Revised: 2016-07-22
  • Published: 2017-01-19
