LI Yan-Ling, Yan Yong-Hong- . Research for Automatic Short Answer Scoring in Spoken English Test Based on Multiple Features[J]. Journal of Electronics & Information Technology, 2012, 34(9): 2097-2102. doi: 10.3724/SP.J.1146.2012.00172
This paper focuses on the automatic scoring of ask-and-answer items in large-scale spoken English tests. Three kinds of features are extracted for scoring from the text produced by Automatic Speech Recognition (ASR): similarity features, parser features, and speech features. Together, the nine features capture the relationship with human ratings from different aspects. Among the similarity measures, Manhattan distance is converted into a similarity score to improve scoring performance. Furthermore, a keyword coverage rate based on edit distance is proposed to tolerate word variations, giving students a more objective score. All of these features are fed into a multiple linear regression model for scoring. Experimental results show that the speaker-level automatic scoring system achieves 98.4% of human rater performance.
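Two of the abstract's feature ideas can be illustrated concretely. The following is a minimal sketch, not the authors' implementation: the distance-to-similarity mapping `1/(1+d)` and the edit-distance tolerance threshold `max_dist` are illustrative assumptions, as the paper's exact formulas are not given here.

```python
def manhattan_similarity(v1, v2):
    """Convert the Manhattan distance between two feature vectors
    into a similarity in (0, 1]; identical vectors give 1.0.
    The 1/(1+d) mapping is an illustrative choice."""
    d = sum(abs(a - b) for a, b in zip(v1, v2))
    return 1.0 / (1.0 + d)

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,      # deletion
                           dp[i][j - 1] + 1,      # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

def keyword_coverage(response_words, keywords, max_dist=1):
    """Fraction of keywords found in the response, where a keyword
    counts as covered if some response word is within max_dist edits,
    so minor spelling/inflection variants are not penalized."""
    if not keywords:
        return 0.0
    hits = sum(
        1 for kw in keywords
        if any(edit_distance(w, kw) <= max_dist for w in response_words)
    )
    return hits / len(keywords)
```

For example, a response containing "colour" would still cover the keyword "color" under `max_dist=1`, which is the kind of word-variation tolerance the abstract describes.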