Publication Details

Multimodal Phoneme Recognition of Meeting Data

MOTLÍČEK, P., ČERNOCKÝ, J. Multimodal Phoneme Recognition of Meeting Data. Lecture Notes in Computer Science, 2004, vol. 2004, no. 3206, p. 379 (6 p.). ISSN: 0302-9743.
Czech title
Multimodální rozpoznávání fonémů na meeting datech
Type
journal article
Language
English
Keywords

speech processing, audio-video processing, feature extraction, pattern recognition

Abstract

This paper describes experiments in automatic recognition of context-independent phoneme strings from meeting data using audio-visual features. Visual features are known to improve the accuracy and noise robustness of automatic speech recognizers. However, many problems appear when the data is not "visually clean", i.e., when the speaker's frontal pose, lighting conditions, background, etc. are not constrained. The goal of this work was to test whether visual information can help phoneme recognition with neural nets. While the audio part is fixed and uses standard Mel filter-bank energies, different features describing the video were tested: average brightness, DCT coefficients extracted from a region-of-interest (ROI), optical flow analysis, and lip-position features. Recognition was evaluated on a subset of IDIAP meeting room data. We have seen a small improvement compared to purely audio-based recognition, but further work is needed, especially on determining the reliability of the video features.
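Of the visual features listed in the abstract, the DCT coefficients extracted from a lip region-of-interest are the most common in audio-visual speech recognition. The sketch below is a minimal illustration in Python/NumPy and SciPy, not the paper's implementation; the ROI size, the number of retained coefficients, and the zig-zag ordering are assumptions for demonstration only.

```python
import numpy as np
from scipy.fftpack import dct


def roi_dct_features(roi: np.ndarray, n_coeffs: int = 15) -> np.ndarray:
    """Return the low-order 2-D DCT coefficients of a grayscale lip ROI.

    The coefficient count and zig-zag ordering are illustrative choices,
    not the configuration used in the paper.
    """
    # 2-D type-II DCT: transform along rows, then along columns.
    d = dct(dct(roi.astype(np.float64), axis=0, norm="ortho"),
            axis=1, norm="ortho")
    # A zig-zag scan keeps the lowest spatial frequencies first,
    # which carry most of the lip-shape information.
    h, w = d.shape
    order = sorted(((i, j) for i in range(h) for j in range(w)),
                   key=lambda ij: (ij[0] + ij[1], ij[0]))
    return np.array([d[i, j] for i, j in order[:n_coeffs]])


# Example: a synthetic 16x16 ROI stands in for a cropped lip image.
features = roi_dct_features(np.random.rand(16, 16))
print(features.shape)  # (15,)
```

In a setup like the paper's, such a per-frame visual feature vector would be concatenated with the Mel filter-bank energies before being fed to the neural-net phoneme classifier.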


Published
2004
Pages
6
Journal
Lecture Notes in Computer Science, vol. 2004, no. 3206, ISSN 0302-9743
Book
Lecture Notes in Computer Science
BibTeX
@article{BUT45741,
  author="Petr {Motlíček} and Jan {Černocký}",
  title="Multimodal Phoneme Recognition of Meeting Data",
  journal="Lecture Notes in Computer Science",
  year="2004",
  volume="2004",
  number="3206",
  pages="379--384",
  issn="0302-9743",
  url="http://www.springerlink.com/index/U0DJ8GHXE220LX81"
}