A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
Tensor-Based CUDA Optimization for ANN Inferencing Using Parallel Acceleration on Embedded GPU
[chapter]
2020
IFIP Advances in Information and Communication Technology
With image processing, robots acquired visual perception skills, enabling them to become autonomous. Since the emergence of Artificial Intelligence (AI), sophisticated tasks such as object identification have become possible through inferencing Artificial Neural Networks (ANNs). However, Autonomous Mobile Robots (AMRs) are Embedded Systems (ESs) with limited on-board resources, so efficient ANN inferencing techniques are required for real-time performance. This paper presents the
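The core idea named in the title, tensor-based acceleration of ANN inference, can be illustrated with a minimal sketch. This is not the paper's implementation: the network shape, weights, and NumPy stand-in for CUDA are all assumptions made for illustration. The point is that a per-sample loop and a single batched tensor operation compute the same result, and the batched form is what a GPU can parallelize:

```python
import numpy as np

# Hypothetical 2-layer ANN (not from the paper): 4 inputs -> 8 hidden -> 3 outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 8)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((8, 3)), rng.standard_normal(3)

def infer_one(x):
    """Naive inference: one matrix-vector product per layer, per sample."""
    h = np.maximum(x @ W1 + b1, 0.0)   # ReLU hidden layer
    return h @ W2 + b2

def infer_batch(X):
    """Tensor-based inference: one matrix-matrix product per layer for the
    whole batch -- the form a GPU (e.g. via CUDA/cuBLAS) runs in parallel."""
    H = np.maximum(X @ W1 + b1, 0.0)
    return H @ W2 + b2

X = rng.standard_normal((16, 4))       # batch of 16 input samples
looped = np.stack([infer_one(x) for x in X])
batched = infer_batch(X)
assert np.allclose(looped, batched)    # identical results, one fused tensor op
```

On an embedded GPU the batched formulation matters because it amortizes kernel-launch and memory-transfer overhead across the whole batch instead of paying it once per sample.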
doi:10.1007/978-3-030-49161-1_25
fatcat:kijobfe7j5bv7cca57rcluqgd4