Results of experiments on lip gesture recognition with an artificial neural network (ANN) are discussed. The neural network module forms the core element of a multimodal human-computer interface called LipMouse, which allows a user to operate a computer with lip movements and gestures. The user's face is detected in a video stream from a standard web camera using a cascade of boosted classifiers working with Haar-like features. Lip region extraction is based on a lip shape approximation calculated by means of lip image segmentation using fuzzy clustering. The ANN is fed with a feature vector describing the appearance of the lip region; the descriptors include a luminance histogram, statistical moments, and statistical parameters of co-occurrence matrices. The ANN recognizes three lip gestures with good accuracy: mouth opening, sticking out the tongue, and forming puckered lips.
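Below is a minimal sketch of such a processing pipeline, assuming OpenCV, scikit-image, scikit-learn and SciPy as stand-ins for the original implementation; the function names, parameter values and the simple lower-face crop (used here instead of the paper's fuzzy-clustering lip segmentation) are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch: Haar-cascade face detection, lip-region appearance
# features (luminance histogram, statistical moments, co-occurrence matrix
# statistics) and a feed-forward ANN classifier. Not the paper's implementation.
import cv2
import numpy as np
from scipy.stats import skew, kurtosis
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

# Cascade of boosted classifiers with Haar-like features (Viola-Jones style).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_lip_features(frame):
    """Return an appearance feature vector for the lip region of one frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    # Rough lip region: lower third of the detected face.
    # (The paper refines this with fuzzy-clustering-based lip segmentation.)
    lips = gray[y + 2 * h // 3 : y + h, x + w // 4 : x + 3 * w // 4]
    lips = cv2.resize(lips, (64, 32))

    # Luminance histogram, normalized.
    hist = cv2.calcHist([lips], [0], None, [32], [0, 256]).ravel()
    hist /= hist.sum() + 1e-9

    # Statistical moments of the luminance distribution.
    pixels = lips.ravel().astype(np.float64)
    moments = [pixels.mean(), pixels.std(), skew(pixels), kurtosis(pixels)]

    # Gray-level co-occurrence matrix (GLCM) statistics.
    glcm = graycomatrix(lips, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_stats = [graycoprops(glcm, p).mean()
                  for p in ("contrast", "homogeneity", "energy", "correlation")]

    return np.concatenate([hist, moments, glcm_stats])

# Feed-forward ANN over the feature vectors; class labels would be the three
# gestures (mouth open, tongue out, puckered lips) plus a neutral class.
clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=1000)
# clf.fit(X_train, y_train)                      # stacked feature vectors + labels
# gesture = clf.predict([extract_lip_features(frame)])
```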
Authors
Additional information
- Category
- Journal publication
- Type
- articles in peer-reviewed journals and other serial publications
- Language
- English
- Year of publication
- 2010
Data source: MOSTWiedzy.pl - publication "Controlling computer by lip gestures employing neural network"