Publications Repository - Gdańsk University of Technology


Limitation of Floating-Point Precision for Resource Constrained Neural Network Training

Insufficient computational power and runtime memory is a major constraint on experiments in the field of artificial intelligence. One promising remedy is to optimize a neural network's internal calculations and the representation of its parameters. This work addresses that issue through neural network training with limited precision. Based on this research, the author proposes a new method of precision limitation for neural network training that leverages a custom, constrained floating-point representation combined with an additional rounding mechanism. Applying it reduces the resources required during neural network training by lowering computational complexity and memory usage. The work shows that the proposed procedure can train commonly used benchmark networks such as LeNet, AlexNet, and ResNet without significant accuracy degradation while using only 8-bit custom floating-point variables. It is also demonstrated that the proposed method of precision limitation does not impair the network's convergence; consequently, training does not need to be extended with additional costly epochs.
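The core idea of the abstract, constraining values to a custom low-bit floating-point representation with an explicit rounding step, can be illustrated with a minimal NumPy sketch. The bit layout (1 sign, 4 exponent, 3 mantissa bits), the round-to-nearest scheme, saturation on overflow, and flush-to-zero for subnormals are all assumptions made here for illustration; the thesis's actual representation and rounding mechanism may differ.

```python
import numpy as np

def quantize_fp8(x, exp_bits=4, man_bits=3):
    """Round float32 values to a simulated 1-sign / exp_bits-exponent /
    man_bits-mantissa floating-point format (round-to-nearest).
    Illustrative only: subnormals are flushed to zero and magnitudes
    beyond the largest representable value saturate."""
    x = np.asarray(x, dtype=np.float32)
    bias = 2 ** (exp_bits - 1) - 1                       # e.g. 7 for 4 exponent bits
    # Largest normal magnitude: (2 - 2^-man_bits) * 2^(max biased exponent - bias)
    max_mag = (2.0 - 2.0 ** -man_bits) * 2.0 ** (2 ** exp_bits - 1 - bias)
    min_mag = 2.0 ** (1 - bias)                          # smallest normal magnitude
    m, e = np.frexp(x)                                   # x = m * 2**e, 0.5 <= |m| < 1
    # Round the significand to man_bits fractional bits (relative to [1, 2))
    scale = 2.0 ** (man_bits + 1)
    m_q = np.round(m * scale) / scale
    q = np.ldexp(m_q, e)
    q = np.clip(q, -max_mag, max_mag)                    # saturate on overflow
    q = np.where(np.abs(q) < min_mag, 0.0, q)            # flush subnormals to zero
    return q.astype(np.float32)
```

In a training loop of the kind the abstract describes, such a function would be applied to weights, activations, and gradients so that every stored value fits the constrained 8-bit format while the arithmetic itself remains conventional.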

Authors

Additional information

Category
Doctorates, habilitation dissertations, nostrifications
Type
doctoral dissertation by employees of Gdańsk University of Technology and doctoral school students
Language
English
Publication year
2024

Source: MOSTWiedzy.pl - publication "Limitation of Floating-Point Precision for Resource Constrained Neural Network Training"

Portal MOST Wiedzy