Machine Learning Application Benchmark for Satellite On-Board Data Processing

Max Ghiglione, Amir Raoofy, Gabriel Dax, Gianluca Furano, Richard Wiest, Carsten Trinitis, Martin Werner, Martin Schulz, Martin Langer
2021, Zenodo
Machine Learning applications are finding their way into demonstration missions such as ESA's Φ-sat, enabled by the use of COTS solutions and improved tools for ML deployment on radiation-tolerant processing units. The satellite industry has been watching these developments with great interest. The challenge of implementing such ML applications lies mainly in three points: 1) Processing capabilities on spacecraft hardware are limited, so algorithms need to be optimized for their embedded application. 2) This also poses challenges for the tools used in the development flow, as classical GPU inference is not possible and integration into the industry workflow is complex. 3) Openly accessible and reusable datasets for space missions are scarce, as data is either proprietary or poorly labeled. To address these challenges, a benchmark for ML inference applications in space is proposed. Such a benchmark would simplify the comparison of algorithms in early development phases, enabling engineers to define the processing power necessary for the desired applications. Moreover, appropriate benchmarking suites will enable the investigation of software tools, custom reconfigurable IP designs, and COTS solutions for ML inference for on-board data processing. Within the MLAB project, Airbus, TU Munich, and OroraTech are developing an ML inference benchmark based on the commercial MLPerf method. In this work, we focus on the description of this benchmark as the main part of the MLAB project and discuss initial findings and directions with respect to datasets and tools. The benchmark intends to cover a diverse set of algorithms, including feature extraction, object detection, classification, tracking, and change detection of varying complexity. This ensures that a range of space use cases and computational complexities is represented in the benchmark. The benchmarking suite relies on [...]
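As a rough illustration of the MLPerf-style measurement approach the benchmark builds on, the sketch below times single-stream inference latency and throughput for an arbitrary model callable. The harness, the dummy model, and the reported metrics are illustrative assumptions for this note, not part of the actual MLAB suite.

```python
# Minimal sketch of an MLPerf-style single-stream inference measurement.
# The model callable, sample count, warm-up length, and latency percentile
# are illustrative assumptions, not the MLAB benchmark specification.
import time
import statistics
from typing import Any, Callable, Sequence


def measure_single_stream(model: Callable[[Any], Any],
                          samples: Sequence[Any],
                          warmup: int = 10,
                          percentile: float = 0.90) -> dict:
    """Run each sample through the model once and report latency statistics."""
    # Warm-up runs amortize one-time costs (caches, JIT, weight loading).
    for sample in samples[:warmup]:
        model(sample)

    latencies = []
    for sample in samples:
        start = time.perf_counter()
        model(sample)
        latencies.append(time.perf_counter() - start)

    latencies.sort()
    idx = min(int(len(latencies) * percentile), len(latencies) - 1)
    return {
        "mean_latency_s": statistics.mean(latencies),
        f"p{int(percentile * 100)}_latency_s": latencies[idx],
        "throughput_samples_per_s": len(latencies) / sum(latencies),
    }


if __name__ == "__main__":
    # Stand-in "model": a fixed busy-loop simulating on-board inference work.
    def dummy_model(x):
        return sum(i * i for i in range(10_000))

    print(measure_single_stream(dummy_model, samples=list(range(100))))
```

On real flight hardware the model callable would wrap the target inference runtime, and percentile latency rather than the mean is the figure of merit, since worst-case timing drives on-board scheduling.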
doi:10.5281/zenodo.5520876