Publication Repository - Politechnika Gdańska


Limitation of Floating-Point Precision for Resource Constrained Neural Network Training

Limited availability of computational power and runtime memory is a major concern for experiments in the field of artificial intelligence. One promising remedy is to optimize a neural network's internal calculations and the representation of its parameters. This work addresses the issue through neural network training with limited precision. Based on this research, the author proposes a new method of precision limitation for neural network training that leverages a custom, constrained floating-point representation with an additional rounding mechanism. Applying it reduces the resources required during neural network training by lowering computational complexity and memory usage. The work shows that the proposed procedure can train commonly used benchmark networks such as LeNet, AlexNet and ResNet without significant accuracy degradation while using only 8-bit custom floating-point variables. It is also demonstrated that the proposed precision limitation does not harm the network's convergence, so training does not need to be extended with additional costly epochs.
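To illustrate the general idea of a constrained floating-point representation with rounding, the sketch below quantizes a standard float to the nearest value representable with a reduced exponent and mantissa width. The bit widths (4-bit exponent, 3-bit mantissa) and the round-to-nearest rule are illustrative assumptions, not the thesis' actual format or rounding mechanism:

```python
import math

def quantize_custom_fp8(x, exp_bits=4, man_bits=3):
    """Round x to the nearest value representable in a custom
    floating-point format with the given exponent/mantissa widths.
    Illustrative sketch only; the dissertation's format may differ."""
    if x == 0.0:
        return 0.0
    # Decompose x = m * 2**e with 0.5 <= |m| < 1.
    m, e = math.frexp(x)
    # Round the significand to man_bits bits beyond the implicit bit.
    scale = 2.0 ** (man_bits + 1)
    m = round(m * scale) / scale
    # Clamp the exponent to the representable range (saturating,
    # instead of overflowing to infinity, for simplicity).
    e_max = 2 ** (exp_bits - 1)
    e_min = 1 - 2 ** (exp_bits - 1)
    e = max(e_min, min(e_max, e))
    return math.ldexp(m, e)
```

In a limited-precision training loop, such a quantizer would be applied to weights, activations, and gradients so that all stored values fit the reduced 8-bit format while arithmetic error stays bounded by the mantissa width.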

Authors

Additional information

Category
Doctorates, habilitation dissertations, nostrifications
Type
doctoral thesis of PG employees and doctoral school students
Language
English
Year of publication
2024

Data source: MOSTWiedzy.pl - publication "Limitation of Floating-Point Precision for Resource Constrained Neural Network Training"

MOST Wiedzy portal