A multimodal corpus developed for research on audio-visual speech recognition is presented. Besides the usual video and sound excerpts, the database also contains thermovision images and depth maps. All streams were recorded simultaneously; therefore, the corpus makes it possible to examine the importance of the information provided by each modality. Based on the recordings, it is also possible to develop a speech recognition system that analyzes several modalities at the same time. The paper describes the process of collecting the multimodal material and the post-processing procedure applied to it. Parameterization methods for the signals belonging to the different modalities are also proposed.
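The paper itself defines the parameterization methods used for each modality; as a rough illustration of what per-modality parameterization of such a corpus can look like, the sketch below extracts standard MFCC vectors from an audio track and simple per-frame intensity statistics from a video (or depth/thermal) stream. All file names, parameters, and the feature choices are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only: the authors propose their own parameterization
# methods; this merely shows a common way to extract frame-level features
# from synchronized audio and visual streams. Paths are hypothetical.
import numpy as np
import librosa   # audio feature extraction
import cv2       # reading video / depth / thermal frames


def audio_features(wav_path, n_mfcc=13):
    """Compute MFCC vectors, a standard speech parameterization."""
    y, sr = librosa.load(wav_path, sr=None)
    # One MFCC vector per analysis frame; shape: (n_mfcc, n_frames)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)


def frame_features(video_path):
    """Toy per-frame descriptor (mean and std of pixel intensity),
    standing in for any visual, depth, or thermal parameterization."""
    cap = cv2.VideoCapture(video_path)
    feats = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        feats.append([gray.mean(), gray.std()])
    cap.release()
    return np.array(feats)   # shape: (n_frames, 2)


if __name__ == "__main__":
    # Hypothetical corpus entry; the actual corpus layout may differ.
    mfcc = audio_features("speaker01_utt01.wav")
    vis = frame_features("speaker01_utt01_rgb.avi")
    print(mfcc.shape, vis.shape)
```

Because all streams were recorded simultaneously, feature sequences like these could then be aligned by timestamps before being fed to a multimodal recognizer.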
Authors
- dr inż. Bartosz Kunka,
- mgr inż. Adam Kupryjanow,
- mgr inż. Piotr Dalka,
- mgr inż. Piotr Bratoszewski,
- dr inż. Maciej Szczodrak,
- mgr inż. Paweł Spaleniak,
- mgr inż. Marcin Szykulski,
- prof. dr hab. inż. Andrzej Czyżewski
Additional information
- Category
- Conference activity
- Type
- Conference proceedings indexed in Web of Science
- Language
- English
- Publication year
- 2013