Li Yan-Xiong, He Qian-Hua, Chen Nan, Qi Chao-Hui. Spectral Stability Feature Based Novel Method for Discriminating Speech and Laughter[J]. Journal of Electronics & Information Technology, 2008, 30(6): 1359-1362. doi: 10.3724/SP.J.1146.2007.00745
This paper proposes a novel method that uses spectral stability as a feature parameter to discriminate between speech and laughter. It is found that the spectral stability of speech is markedly lower than that of laughter, which indicates that spectral stability can serve as a feature parameter for discriminating the two. The discrimination performance of Spectral Stability (SS), Mel-Frequency Cepstral Coefficients (MFCC), Perceptual Linear Prediction (PLP), and pitch is compared under identical experimental conditions. The experimental results show that, with spectral stability as the feature parameter, the discrimination accuracies reach 90.74% and 73.63% in the speaker-dependent and speaker-independent conditions, respectively, and that the discrimination power of spectral stability is superior to that of the other feature parameters.
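The abstract does not give the exact formula for spectral stability, so the following is only a hypothetical sketch of one plausible formulation: the average correlation between the magnitude spectra of adjacent analysis frames, where a higher score indicates a more slowly varying (more stable) spectrum. The function name spectral_stability and the frame parameters are illustrative assumptions, not the authors' definition.

import numpy as np


def spectral_stability(signal, sr, frame_len=0.025, frame_shift=0.010):
    """Estimate spectral stability of an audio segment (1-D NumPy array).

    Assumed measure: mean cosine similarity between magnitude spectra of
    consecutive frames (not necessarily the paper's definition).
    """
    n = int(frame_len * sr)      # samples per analysis frame
    hop = int(frame_shift * sr)  # samples per frame shift
    window = np.hamming(n)

    # Magnitude spectra of overlapping, windowed frames
    frames = [signal[i:i + n] * window
              for i in range(0, len(signal) - n, hop)]
    spectra = [np.abs(np.fft.rfft(f)) for f in frames]

    # Cosine similarity between each pair of adjacent frame spectra;
    # the mean over all pairs is the stability score for the segment.
    corrs = []
    for prev, curr in zip(spectra[:-1], spectra[1:]):
        num = np.dot(prev, curr)
        den = np.linalg.norm(prev) * np.linalg.norm(curr) + 1e-12
        corrs.append(num / den)
    return float(np.mean(corrs))


# Example usage with a synthetic 1-second signal at 16 kHz:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal(16000)
    print(spectral_stability(x, sr=16000))

Under the paper's finding, laughter segments would be expected to yield higher scores than speech segments with such a measure, since their spectra change more slowly from frame to frame.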