A method for automatic audiovisual transcription of speech employing acoustic, electromagnetic articulography, and visual speech representations is developed. It combines the audio and visual modalities, which yields a synergy effect in terms of speech recognition accuracy. To establish a robust solution, basic research is carried out on the relation between the allophonic variation of speech, i.e., the changes in the articulatory setting of the speech organs for the same phoneme produced in different phonetic environments, and the objective signal parameters (both audio and video). The method is sensitive to minute allophonic detail as well as to accentual differences. It is shown that by analyzing video signals together with the acoustic signal, speech transcription can be performed more accurately and robustly than by using the acoustic modality alone. In particular, various features extracted from the visual signal are tested for their ability to encode allophonic variations in pronunciation. New methods for modeling the accentual and allophonic variation of speech are developed.
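As an illustration of the fusion idea described above, the following is a minimal sketch of feature-level (early) audiovisual fusion for frame-wise classification. It is a hypothetical toy example, not the authors' pipeline: the choice of features (MFCC-like audio vectors, lip-shape-like visual vectors), the random placeholder data, and the SVM classifier are all assumptions made for illustration only.

```python
# Minimal sketch of early (feature-level) audiovisual fusion.
# All data below is random placeholder content; in a real system the
# audio features would come from the speech signal (e.g., MFCCs) and
# the visual features from the speaker's lip region.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_frames = 200
audio_feats = rng.normal(size=(n_frames, 13))   # e.g., 13 MFCCs per frame (assumption)
video_feats = rng.normal(size=(n_frames, 20))   # e.g., lip-shape parameters (assumption)
labels = rng.integers(0, 3, size=n_frames)      # hypothetical allophone class labels

# Early fusion: concatenate the modalities per frame, so the classifier
# can exploit correlations between the acoustic and visual streams.
fused = np.hstack([audio_feats, video_feats])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(fused[:150], labels[:150])
print("held-out accuracy:", clf.score(fused[150:], labels[150:]))
```

With real features, comparing this fused classifier against one trained on `audio_feats` alone would quantify the synergy effect the abstract refers to.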
Additional information
- DOI
- 10.1121/1.4949947
- Category
- Journal publication
- Type
- article in a journal listed in the JCR
- Language
- English
- Publication year
- 2016