The Contextual Multi-Armed Bandit (CMAB) framework is a cornerstone of sequential decision-making. However, because deploying online algorithms is often impractical, attention has shifted toward offline policy learning, which relies on pre-collected datasets. This study examines how the quality of these datasets affects the performance of offline policy-learning algorithms, specifically Neural Greedy and NeuraLCB. Our results demonstrate that NeuraLCB can learn from a wide range of datasets, while Neural Greedy requires extensive coverage of the action space to learn effectively. Moreover, how the data is collected significantly affects the efficiency of offline methods. These findings underscore the critical role of dataset quality in offline policy learning.
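To make the greedy-versus-pessimistic contrast concrete, the sketch below sets up a toy offline contextual bandit and compares a greedy policy (trust the fitted reward model everywhere) with a lower-confidence-bound policy (penalize actions poorly covered by the logged data). This is a minimal illustration, not the paper's implementation: linear ridge models stand in for the neural networks of Neural Greedy and NeuraLCB, and all names, dimensions, and parameters (the logging policy, `beta`, `eps`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy contextual bandit: d-dimensional contexts, K arms.
# The expected reward of arm a in context x is x @ theta[a].
d, K, n = 5, 4, 2000
theta = rng.normal(size=(K, d))

def sample_context():
    return rng.normal(size=d)

def reward(x, a):
    return x @ theta[a] + 0.1 * rng.normal()

# Offline dataset logged by a narrow behavior policy: it almost
# always pulls arm 0, so coverage of the other arms is poor.
def logging_policy(x, eps=0.1):
    return int(rng.integers(K)) if rng.random() < eps else 0

data = []
for _ in range(n):
    x = sample_context()
    a = logging_policy(x)
    data.append((x, a, reward(x, a)))

# Per-arm ridge-regression reward model (a linear stand-in for
# the neural reward models used in the paper).
lam = 1.0
A = [lam * np.eye(d) for _ in range(K)]   # per-arm Gram matrices
b = [np.zeros(d) for _ in range(K)]
for x, a, r in data:
    A[a] += np.outer(x, x)
    b[a] += r * x
theta_hat = [np.linalg.solve(A[a], b[a]) for a in range(K)]

def greedy_action(x):
    # Greedy: trust the point estimates, even for barely-seen arms.
    return int(np.argmax([x @ theta_hat[a] for a in range(K)]))

def lcb_action(x, beta=1.0):
    # Pessimism: subtract an uncertainty width x^T A_a^{-1} x,
    # which is large for arms the logged data barely covers.
    scores = [
        x @ theta_hat[a] - beta * np.sqrt(x @ np.linalg.solve(A[a], x))
        for a in range(K)
    ]
    return int(np.argmax(scores))

def mean_regret(policy, trials=2000):
    total = 0.0
    for _ in range(trials):
        x = sample_context()
        total += np.max(theta @ x) - x @ theta[policy(x)]
    return total / trials

print(f"greedy mean regret: {mean_regret(greedy_action):.3f}")
print(f"LCB    mean regret: {mean_regret(lcb_action):.3f}")
```

Under this narrow logging policy, the greedy policy tends to over-commit to poorly estimated arms, while the LCB policy's uncertainty penalty keeps it closer to well-covered actions, mirroring the abstract's point that Neural Greedy needs broad action-space coverage whereas NeuraLCB tolerates weaker datasets.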
Authors
Additional information
- DOI
- 10.5220/0012311000003636
- Category
- Conference activity
- Type
- publication in a peer-reviewed edited volume (including conference proceedings)
- Language
- English
- Year of publication
- 2024