In this paper, adaptive hyperparameter optimization (HPO) strategies within the efficient global optimization (EGO) with neural network (NN)-based prediction and uncertainty (EGONN) algorithm are proposed. These strategies use Bayesian optimization and multi-armed bandit optimization to tune HPs during the sequential sampling process, either every iteration (HPO-1itr) or every five iterations (HPO-5itr). Through experiments on the three-dimensional Hartmann function, evaluating both the full and a partial set of HPs, the adaptive HPO strategies are compared to traditional static HPO (HPO-static), which keeps HPs constant. The results show that the adaptive HPO strategies outperform HPO-static and that both the tuning frequency and the number of tuned HPs affect optimization accuracy and computational efficiency. Specifically, the adaptive HPO strategies demonstrate rapid convergence (HPO-1itr at 28 iterations and HPO-5itr at 26 iterations for the full HP set; HPO-1itr at 13 and HPO-5itr at 28 iterations for the selected HPs), whereas HPO-static fails to approximate the minimum within the allocated 45 iterations in both scenarios. HPO-5itr emerges as the most balanced approach, requiring only 21% of the time taken by HPO-1itr when tuning the full HP set and 29% when tuning the subset of HPs. This work demonstrates the importance of adaptive HPO and sets the stage for future research.
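The abstract does not detail the EGONN internals, but the re-tuning cadence it describes can be illustrated with a minimal sketch: a sequential sampling loop on the 3D Hartmann function in which a surrogate's hyperparameter is re-optimized every few iterations. In the sketch below, the RBF surrogate, the leave-one-out grid tuner (`tune_length_scale`), the distance-based uncertainty proxy, and the lower-confidence-bound acquisition are all illustrative stand-ins for the paper's NN-based prediction/uncertainty model and its Bayesian / multi-armed-bandit HPO; they are assumptions for demonstration, not the authors' implementation.

```python
import numpy as np

# --- 3D Hartmann test function (global minimum ~ -3.8628) -------------------
ALPHA = np.array([1.0, 1.2, 3.0, 3.2])
A = np.array([[3.0, 10.0, 30.0], [0.1, 10.0, 35.0],
              [3.0, 10.0, 30.0], [0.1, 10.0, 35.0]])
P = 1e-4 * np.array([[3689, 1170, 2673], [4699, 4387, 7470],
                     [1091, 8732, 5547], [381, 5743, 8828]])

def hartmann3(x):
    return -np.sum(ALPHA * np.exp(-np.sum(A * (x - P) ** 2, axis=1)))

# --- Simple RBF surrogate with one hyperparameter (the length scale) --------
def fit_rbf(X, y, length_scale):
    K = np.exp(-np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
               / (2.0 * length_scale ** 2))
    return np.linalg.solve(K + 1e-8 * np.eye(len(X)), y)

def predict_rbf(X, weights, Xq, length_scale):
    Kq = np.exp(-np.sum((Xq[:, None, :] - X[None, :, :]) ** 2, axis=-1)
                / (2.0 * length_scale ** 2))
    mean = Kq @ weights
    # Crude uncertainty proxy: distance to the nearest sampled point.
    unc = np.min(np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=-1), axis=1)
    return mean, unc

def tune_length_scale(X, y, grid):
    """Stand-in tuner: leave-one-out error over a small grid (the paper's
    adaptive HPO uses Bayesian / multi-armed-bandit optimizers instead)."""
    best, best_err = grid[0], np.inf
    for ls in grid:
        errs = []
        for i in range(len(X)):
            mask = np.arange(len(X)) != i
            w = fit_rbf(X[mask], y[mask], ls)
            m, _ = predict_rbf(X[mask], w, X[i:i + 1], ls)
            errs.append((m[0] - y[i]) ** 2)
        if np.mean(errs) < best_err:
            best, best_err = ls, np.mean(errs)
    return best

# --- Sequential sampling with periodic hyperparameter re-tuning -------------
rng = np.random.default_rng(0)
X = rng.random((5, 3))                       # initial design in [0, 1]^3
y = np.array([hartmann3(x) for x in X])
length_scale, tune_every = 0.5, 5            # tune_every=1 ~ HPO-1itr, 5 ~ HPO-5itr

for it in range(45):
    if it % tune_every == 0:                 # adaptive HPO step
        length_scale = tune_length_scale(X, y, grid=[0.1, 0.2, 0.5, 1.0])
    w = fit_rbf(X, y, length_scale)
    cand = rng.random((2000, 3))             # random candidate pool
    mean, unc = predict_rbf(X, w, cand, length_scale)
    x_next = cand[np.argmin(mean - 2.0 * unc)]   # lower-confidence-bound pick
    X = np.vstack([X, x_next])
    y = np.append(y, hartmann3(x_next))

print("best value found:", y.min())          # should approach -3.8628
```

Setting `tune_every` to 1 mimics the per-iteration tuning cadence, while larger values trade tuning cost for potentially slower surrogate adaptation, which is the trade-off the abstract reports between HPO-1itr and HPO-5itr.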
Authors
Additional information
- DOI: 10.1007/978-3-031-63775-9_6
- Category: Conference activity
- Type: Publication in a peer-reviewed edited volume (including conference proceedings)
- Language: English
- Year of publication: 2024