2021 International Conference on Applied Electronics (AE)
Advancements in machine-learning algorithms have made it necessary to explore fast algorithms for floating-point operations; addition is the most commonly used complex operation, involving significant delay and power consumption. Applications include high-performance computer vision, imaging, and deep-learning functions accelerated by dedicated hardware accelerators. This paper proposes a 32-bit floating-point adder based on the far-and-close-datapath algorithm with added optimizations to give a
doi:10.23919/ae51540.2021.9542881
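As background on the algorithm the paper builds on: a far-and-close-datapath adder routes each operand pair to one of two paths. The close path handles effective subtraction when the exponents differ by at most one, the only case where massive cancellation can occur and a leading-zero count is needed for normalization; all other cases go to the far path, which needs at most a one-bit normalization shift. A minimal sketch of this path-selection rule (function names and the software decoding via `struct` are illustrative, not from the paper):

```python
import struct

def fields(x: float):
    """Decode IEEE-754 single-precision sign, biased exponent, mantissa."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

def choose_path(a: float, b: float) -> str:
    """Far/close path selection for a far-and-close-datapath adder.

    Close path: effective subtraction (opposite signs) with exponent
    difference <= 1, where catastrophic cancellation may require a
    large normalization shift guided by a leading-zero count.
    Far path: everything else; at most a single-bit normalization.
    """
    sa, ea, _ = fields(a)
    sb, eb, _ = fields(b)
    effective_sub = (sa != sb)
    return 'close' if effective_sub and abs(ea - eb) <= 1 else 'far'
```

For example, `choose_path(1.5, -1.25)` selects the close path (opposite signs, equal exponents), while `choose_path(8.0, 1.0)` selects the far path. In hardware, both paths are evaluated in parallel and a multiplexer picks the result, trading area for latency.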