Gradient amplification: An efficient way to train deep neural networks
2020
Big Data Mining and Analytics
Improving the performance of deep learning models and reducing their training times are ongoing challenges in deep neural networks. Several approaches have been proposed to address these challenges, one of which is to increase the depth of the network. Such deeper networks not only take longer to train, but also suffer from the vanishing gradient problem during training. In this work, we propose a gradient amplification approach for training deep learning models that prevents vanishing gradients.
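The abstract's core idea, scaling up gradients during training so they do not vanish in deep networks, can be sketched with a minimal update step. This is an illustrative assumption of how amplification might be applied, not the paper's exact algorithm: the layer selection, amplification factor, and parameter names below are all hypothetical.

```python
import numpy as np

def sgd_step_with_amplification(params, grads, lr, amplified, factor):
    """One SGD update in which gradients of selected layers are amplified.

    Gradient amplification multiplies the (often tiny) gradients of chosen
    layers by a factor > 1 before the parameter update, countering the
    vanishing gradient effect in deep networks. Which layers to amplify
    and by how much are assumptions here, not the paper's schedule.
    """
    updated = {}
    for name, p in params.items():
        g = grads[name]
        if name in amplified:
            g = g * factor  # scale up the vanishing gradient
        updated[name] = p - lr * g
    return updated

# Hypothetical parameters: an early layer with near-vanished gradients
# and a late layer with healthy gradients.
params = {"layer1.w": np.array([1.0, 2.0]), "layer9.w": np.array([0.5])}
grads = {"layer1.w": np.array([1e-6, 2e-6]), "layer9.w": np.array([0.1])}

new = sgd_step_with_amplification(
    params, grads, lr=0.1, amplified={"layer1.w"}, factor=1000.0
)
```

Only `layer1.w` has its gradient scaled, so it receives a meaningful update instead of an effectively zero one; `layer9.w` is updated normally.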
doi:10.26599/bdma.2020.9020004