Volume 40, Issue 1, January 2018
LIU Chang, ZHANG Yike, ZHANG Pengyuan, YAN Yonghong. Neural Network Language Modeling Using an Improved Topic Distribution Feature[J]. Journal of Electronics & Information Technology, 2018, 40(1): 219-225. doi: 10.11999/JEIT170219

Neural Network Language Modeling Using an Improved Topic Distribution Feature

doi: 10.11999/JEIT170219
Funds: The National Natural Science Foundation of China (11590770-4, U1536117, 11504406, 11461141004), The National Key Research and Development Plan (2016YFB0801203, 2016YFB0801200), The Key Science and Technology Project of the Xinjiang Uygur Autonomous Region (2016A03007-1)

  • Received Date: 2017-03-17
  • Revised Date: 2017-10-06
  • Publish Date: 2018-01-19
  • Abstract: Attaching topic features to the input of Recurrent Neural Network (RNN) models is an effective way to exploit distant contextual information. To cope with the problem that topic distributions may vary greatly across documents, this paper proposes an improved topic feature built from the topic distributions of documents and applies it to a recurrent Long Short-Term Memory (LSTM) language model. Experiments show that the proposed feature achieves an 11.8% relative perplexity reduction on the Penn TreeBank (PTB) dataset, and 6.0% and 6.8% relative Word Error Rate (WER) reductions on the Switchboard (SWBD) and Wall Street Journal (WSJ) speech recognition tasks, respectively. On the WSJ task, an RNN with this feature matches the performance of an LSTM on the eval92 test set. (A minimal sketch of the input-concatenation scheme is given below.)
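As an illustration of the general scheme described in the abstract (not the authors' implementation), the PyTorch sketch below concatenates a per-document topic-distribution vector, such as an LDA posterior over topics, with the word embedding at every time step of an LSTM language model. The class name, dimensions, and the plain topic vector are assumptions made for illustration only; the paper's specific "improved" topic feature is not reproduced here.

# A minimal sketch (assumptions, not the paper's code) of conditioning an LSTM
# language model on a document-level topic distribution via input concatenation.
import torch
import torch.nn as nn


class TopicConditionedLSTMLM(nn.Module):
    """LSTM language model whose input is [word embedding ; topic distribution]."""

    def __init__(self, vocab_size, embed_dim=200, topic_dim=50, hidden_dim=650):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # The LSTM sees the word embedding concatenated with the topic vector.
        self.lstm = nn.LSTM(embed_dim + topic_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_ids, topic_vec, hidden=None):
        # word_ids:  (batch, seq_len) token indices
        # topic_vec: (batch, topic_dim) per-document topic distribution,
        #            e.g. an LDA posterior over topics for the document
        emb = self.embed(word_ids)                                   # (B, T, E)
        topics = topic_vec.unsqueeze(1).expand(-1, emb.size(1), -1)  # (B, T, K)
        rnn_in = torch.cat([emb, topics], dim=-1)                    # (B, T, E+K)
        rnn_out, hidden = self.lstm(rnn_in, hidden)
        return self.out(rnn_out), hidden


if __name__ == "__main__":
    model = TopicConditionedLSTMLM(vocab_size=10000)
    words = torch.randint(0, 10000, (4, 35))        # toy batch of token ids
    topics = torch.softmax(torch.randn(4, 50), -1)  # toy topic distributions
    logits, _ = model(words, topics)
    print(logits.shape)  # torch.Size([4, 35, 10000])

Repeating the topic vector at every time step leaves the recurrent state free to model the local word history, while the topic vector supplies the long-span, document-level context that the abstract refers to.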