Research for Automatic Short Answer Scoring in Spoken English Test Based on Multiple Features
Abstract: This paper addresses automatic scoring of the ask-and-answer item in a large-scale spoken English test using a fusion of multiple features. Taking the text produced by Automatic Speech Recognition (ASR) as the object of study, three kinds of features are extracted for scoring: similarity features, syntactic features, and speech features. In total, nine features characterize the relationship between an examinee's answer and the expert score from different aspects. Among the similarity features, Manhattan distance is converted into a similarity measure to improve scoring performance. In addition, a keyword coverage rate based on edit distance is proposed to account for word variation in the recognized text, providing a basis for giving each examinee an objective and fair score. All extracted features are fused with a multiple linear regression model to produce the machine score. Experimental results show that the extracted features are highly effective for machine scoring, and that the system's per-examinee scoring performance reaches 98.4% of that of the human raters.
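Two of the features named in the abstract lend themselves to a brief illustration. The following is a hypothetical sketch, not the paper's actual implementation: the exact formulas are not given here, so the conversion `similarity = 1 / (1 + distance)` for Manhattan distance, the edit-distance threshold `max_dist`, and all function names are illustrative assumptions.

```python
def edit_distance(a: str, b: str) -> int:
    # Classic Levenshtein edit distance via dynamic programming.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

def keyword_coverage(answer_words, keywords, max_dist=1):
    # A keyword counts as covered if some recognized word lies within
    # max_dist edits of it, tolerating ASR word variation
    # (e.g. "colour" still matches a recognized "color").
    covered = sum(
        1 for k in keywords
        if any(edit_distance(k, w) <= max_dist for w in answer_words)
    )
    return covered / len(keywords)

def manhattan_similarity(v1, v2):
    # One plausible way to turn Manhattan distance into a similarity
    # in (0, 1]: identical vectors map to 1, large distances toward 0.
    d = sum(abs(a - b) for a, b in zip(v1, v2))
    return 1.0 / (1.0 + d)
```

A per-examinee score could then be obtained by feeding such feature values, together with the syntactic and speech features, into a multiple linear regression model fitted against expert scores.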