Publication Date:
2023
Abstract:
In the last decade, deep learning models have competed for performance at the price of tremendous computational costs. This critical aspect has recently attracted attention for both the training and inference phases. Inference complexity is obviously orders of magnitude lower than training complexity; on the other hand, inference is performed many times, which impacts efficiency on edge or embedded devices. Inference can be made efficient through neural network pruning, which consists of removing parameters and neurons from the model's topology while maintaining the model's accuracy. This results in reduced resource and energy requirements for the models. This paper describes two pruning procedures for lowering the number of operations required during the inference phase and a method to exploit the resulting sparsity. The same procedures cannot be applied directly at training time; however, we show that it is possible to borrow similar ideas to reduce the cost of gradient backpropagation by disabling the computation for selected neurons.
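The following is a minimal, hypothetical PyTorch sketch of the two concepts the abstract mentions: pruning parameters via a binary mask, and disabling gradient computation for the pruned parameters during backpropagation. The function names, the magnitude-based pruning criterion, and the gradient-hook mechanism are illustrative assumptions, not the specific procedures proposed in the paper.

import torch
import torch.nn as nn

def magnitude_prune(layer: nn.Linear, fraction: float = 0.5) -> torch.Tensor:
    # Hypothetical magnitude criterion (not the paper's actual procedure):
    # zero out the smallest-magnitude weights and return the binary keep-mask.
    flat = layer.weight.detach().abs().flatten()
    k = int(fraction * flat.numel())
    if k == 0:
        return torch.ones_like(layer.weight)
    threshold = flat.kthvalue(k).values        # k-th smallest magnitude
    mask = (layer.weight.detach().abs() > threshold).float()
    with torch.no_grad():
        layer.weight.mul_(mask)                # "remove" the pruned parameters
    return mask

def freeze_pruned(layer: nn.Linear, mask: torch.Tensor) -> None:
    # Disable backpropagation through pruned weights: the hook multiplies
    # the incoming gradient by the mask, so pruned entries receive no update.
    layer.weight.register_hook(lambda grad: grad * mask)

# Usage sketch: prune 80% of a layer's weights, then keep them frozen.
layer = nn.Linear(128, 64)
mask = magnitude_prune(layer, fraction=0.8)
freeze_pruned(layer, mask)

x = torch.randn(32, 128)
layer(x).sum().backward()                      # gradients flow only to kept weights
assert torch.all(layer.weight.grad[mask == 0] == 0)

Skipping the gradient computation entirely for masked units (rather than multiplying by zero, as above) is what would actually reduce backpropagation cost; the mask-based hook is only a functional stand-in for that idea.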
CRIS Type:
04A-Conference paper in volume
Keywords:
Deep Learning; Efficiency; Pruning
Author List:
Bragagnolo A.; Tartaglione E.; Dalmasso G.; Grangetto M.
Link to the Complete Record:
Link to Full Text:
Book Title:
CEUR Workshop Proceedings
Published in: