To this day, driver fatigue remains one of the most significant causes of road accidents. In this paper, a novel way of detecting and monitoring a driver's physical state is proposed. The system uses multimodal imaging from RGB and thermal cameras working simultaneously to monitor the driver's current condition. A custom dataset consisting of thermal and RGB video samples was created. The acquired data were further processed to extract metrics describing the state of the eyes and mouth, namely the eye aspect ratio (EAR) and the mouth aspect ratio (MAR), respectively; breathing characteristics were also measured. A customized residual neural network was chosen as the final prediction model for the entire system. The results validate the chosen approach to fatigue detection, with the proposed model achieving an average accuracy of 75% on the test data.
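The abstract mentions the eye aspect ratio (EAR) and mouth aspect ratio (MAR) without giving their formulas. The sketch below is a minimal illustration of the commonly used landmark-based definitions (the 6-point EAR of Soukupová and Čech, and an analogous MAR); the exact landmark indexing and the MAR variant used by the authors are assumptions and may differ from their implementation.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR from six (x, y) eye landmarks ordered p1..p6:
    p1 outer corner, p2/p3 upper lid, p4 inner corner, p5/p6 lower lid.
    (Standard 6-point definition; assumed, not taken from the paper.)"""
    a = np.linalg.norm(eye[1] - eye[5])  # vertical distance |p2 - p6|
    b = np.linalg.norm(eye[2] - eye[4])  # vertical distance |p3 - p5|
    c = np.linalg.norm(eye[0] - eye[3])  # horizontal distance |p1 - p4|
    return (a + b) / (2.0 * c)           # small EAR -> eye nearly closed

def mouth_aspect_ratio(mouth: np.ndarray) -> float:
    """One common MAR variant from four (x, y) mouth landmarks:
    index 0 left corner, 1 top center, 2 right corner, 3 bottom center.
    (Illustrative only; the paper's exact MAR definition is not given.)"""
    vertical = np.linalg.norm(mouth[1] - mouth[3])    # lip opening
    horizontal = np.linalg.norm(mouth[0] - mouth[2])  # mouth width
    return vertical / horizontal                      # large MAR -> yawning

# Example with dummy landmark coordinates (pixels):
eye = np.array([[10, 20], [14, 17], [18, 17], [22, 20], [18, 23], [14, 23]], dtype=float)
mouth = np.array([[30, 50], [40, 45], [50, 50], [40, 58]], dtype=float)
print(eye_aspect_ratio(eye), mouth_aspect_ratio(mouth))
```

In a typical fatigue-detection pipeline, these per-frame ratios are thresholded or fed as a time series into the prediction model, so that prolonged eye closure (low EAR) and yawning (high MAR) can be flagged.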
Authors
Additional information
- DOI
- 10.1007/978-3-031-43078-7_6
- Category
- Conference activity
- Type
- publication in a peer-reviewed collective work (including conference proceedings)
- Language
- English
- Year of publication
- 2023