A survey on GAN acceleration using memory compression techniques
2021
Journal of Engineering and Applied Science (Cairo) (Online)
Abstract: Since its invention, generative adversarial networks (GANs) have shown outstanding results in many applications. GANs are powerful, yet resource-hungry deep learning models. The main differences between GANs and ordinary deep learning models are the nature of their output and their training instability. For example, a GAN's output can be a whole image, whereas other models detect objects or classify images. Thus, the architecture and numeric precision of the network affect the quality and
doi:10.1186/s44147-021-00045-5
fatcat:hy3oxa4fvzavhophwiekph4rum