Learning-rate annealing methods for deep neural networks

Research output: Contribution to journal › Journal article › Research › peer-review


  • Kensuke Nakamura
  • Bilel Derbel
  • Kyoung Jae Won
  • Byung Woo Hong

Deep neural networks (DNNs) have achieved great success over the last decade. DNNs are typically optimized with stochastic gradient descent (SGD) using learning-rate annealing, which outperforms adaptive methods in many tasks. However, there is no commonly accepted choice of annealing schedule for SGD. This paper presents an empirical analysis of learning-rate annealing based on experiments with the major datasets for image classification, one of the key applications of DNNs. Our experiments combine recent deep neural network models with a variety of learning-rate annealing methods. We also propose an annealing schedule that combines a sigmoid function with warmup, which is shown to surpass both the adaptive methods and the other existing schedules in accuracy in most cases.
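
As a rough illustration of the idea described in the abstract, the sketch below implements a sigmoid-shaped learning-rate decay preceded by a linear warmup phase. The function name, parameters, and exact parameterization are assumptions made for illustration and are not taken from the paper.

import math

def sigmoid_warmup_lr(step, total_steps, base_lr=0.1, final_lr=1e-4,
                      warmup_steps=500, steepness=10.0):
    """Sigmoid-shaped decay from base_lr to final_lr with linear warmup.

    Hypothetical parameterization; the paper's exact formula may differ.
    """
    if step < warmup_steps:
        # Linear warmup from ~0 up to base_lr.
        return base_lr * (step + 1) / warmup_steps
    # Progress through the decay phase, mapped to [0, 1].
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    # Sigmoid centered at the midpoint of the decay phase:
    # stays near base_lr early, then drops smoothly toward final_lr.
    decay = 1.0 / (1.0 + math.exp(steepness * (t - 0.5)))
    return final_lr + (base_lr - final_lr) * decay

For example, calling sigmoid_warmup_lr(step, total_steps=50000) once per training step would yield a schedule that warms up for 500 steps and then decays along an S-shaped curve.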

Original language: English
Article number: 2029
Journal: Electronics (Switzerland)
Volume: 10
Issue number: 16
Number of pages: 12
DOIs
Publication status: Published - 2021

Bibliographical note

Publisher Copyright:
© 2021 by the authors. Licensee MDPI, Basel, Switzerland.

Research areas

  • Image classification, Learning rate annealing, Stochastic gradient descent

ID: 279140643