Orchestrating the Development Lifecycle of Machine Learning-Based IoT Applications: A Taxonomy and Survey

Bin Qian, Jie Su, Zhenyu Wen, Devki Nandan Jha, Yinhao Li, Yu Guan, Deepak Puthal, Philip James, Renyu Yang, Albert Y. Zomaya, Omer Rana, Lizhe Wang (+2 others)
2020 ACM Computing Surveys  
Machine Learning (ML) and the Internet of Things (IoT) are complementary advances: ML techniques unlock the intelligence potential of IoT, while IoT applications increasingly feed sensor-collected data into ML models and use the results to improve their business processes and services. Orchestrating ML pipelines, which span model training and inference within the holistic development lifecycle of an IoT application, therefore often leads to complex system integration. This paper
provides a comprehensive and systematic survey of the development lifecycle of ML-based IoT applications. We outline the core roadmap and taxonomy, and subsequently assess and compare the existing standard techniques used at each individual stage.

Additional Key Words and Phrases: IoT, Machine learning, Deep learning, Orchestration

INTRODUCTION

The rapid development of hardware, software and communication technologies has accelerated the connection of the physical world to the Internet via the Internet of Things (IoT). A report¹ estimates that about 75.44 billion IoT devices will be connected to the Internet by 2025. These devices generate massive amounts of data with various modalities. Processing and analyzing such big data is essential for developing smart IoT applications. Machine Learning (ML), which aims to understand and explore the real world through data intelligence, plays a vital role here, and ML + IoT applications are thus experiencing explosive growth. However, gaps remain between current solutions and the demands of orchestrating the development lifecycle of ML-based IoT applications. Existing orchestration frameworks, for example Ubuntu Juju, Puppet and Chef, are flexible in providing solutions for deploying and running applications over public or private clouds. These frameworks, however, neglect the heterogeneity of IoT environments, which encompass diverse hardware, communication protocols and operating systems. More importantly, none of them can completely orchestrate a holistic development lifecycle of ML-based IoT applications. The development lifecycle must cover the following factors: (1) how the target application is specified and developed, (2) where the target application is deployed, and (3) what information about the target application is audited. The application specification defines the requirements, including the ML tasks, performance, accuracy and execution workflow.
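To make the three lifecycle questions above concrete, a specification of this kind could be captured declaratively. The sketch below is purely illustrative: the field names and the `validate_spec` helper are our own assumptions, not a schema defined by any of the surveyed frameworks.

```python
# Hypothetical specification for an ML-based IoT application.
# All field names are illustrative assumptions, not a standard schema.
app_spec = {
    "task": "classification",             # the ML task to perform
    "model_requirements": {
        "min_accuracy": 0.90,             # required predictive quality
        "max_latency_ms": 50,             # performance constraint
    },
    "deployment": {
        "target": "edge-gateway",         # where the application is deployed
        "os": "linux/arm64",
    },
    "audit": ["accuracy_drift", "resource_usage"],            # what is audited
    "workflow": ["ingest", "preprocess", "infer", "report"],  # execution workflow
}

def validate_spec(spec):
    """Check that the minimal lifecycle questions (develop/deploy/audit) are answered."""
    required = {"task", "model_requirements", "deployment", "audit", "workflow"}
    missing = required - spec.keys()
    if missing:
        raise ValueError(f"specification incomplete, missing: {sorted(missing)}")
    return True

print(validate_spec(app_spec))  # True
```

A real orchestrator would translate such a document into concrete training, deployment and monitoring actions; here it only illustrates what "specification" means in the lifecycle above.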
Based on the specification and the available computing resources, ML models are developed to meet the specified requirements while optimizing the training process in terms of time and computing resources. Next, model deployment considers the difficulty of [...] simulated environment. The following subsections discuss the pipeline in detail.

* Zhenyu is the corresponding author

Fig. 3. A taxonomy for orchestrating the ML-based IoT application development lifecycle

2 https://github.com/Dash-Industry-Forum/dash.js

Model Selection

Model selection aims to find the optimal ML model to perform a user's specified tasks while adapting to the complexity of IoT environments. In this section, we first discuss model selection across three main categories, i.e., TML, DL and RL, and then survey well-known models (or algorithms) in each category and the corresponding criteria for model selection.

2.1.1 TML vs. DL vs. RL. In this work we roughly divide ML approaches/concepts into TML, DL and RL. Compared with the currently dominant DL, TML is relatively lightweight: it is a set of algorithms that directly transform the input data into outputs according to certain criteria. For supervised cases, when a class label is available for training, TML aims to map the input data to the labels by optimising a model, which can then be used to infer unseen data at the test stage. However, since the relationship between raw data and labels might be highly non-linear, feature engineering (a heuristic trial-and-error process) is normally required to construct appropriate input features. Because TML models are relatively simple, their interpretability (e.g., the relationship between the engineered features and the labels) tends to be high. DL has become popular in recent years.
Consisting of multiple layers, DL is powerful for modeling complex non-linear relationships between the input and the output, and thus does not require the aforementioned heuristic (and expensive) feature engineering process, which has made it a popular modelling approach in many fields such as computer vision and natural language processing. Compared with TML, DL models tend to have more parameters (to be estimated) and generally require more data for reliable representation learning. It is therefore crucial to guarantee data quality: a recent empirical study [247] suggested that a growing number of noisy or less-representative training samples may harm DL's performance, making it less generalizable to unseen test data. Moreover, DL's multilayer structure makes it difficult to interpret the complex relationship between input (i.e., raw features) and output, although visualisation techniques (e.g., attention maps [385]) are increasingly used and play an important role in understanding DL's decision-making process.

RL has become increasingly popular due to its success in addressing challenging sequential decision-making problems [324]. Some of these achievements are based on the combination of DL and RL, i.e., Deep Reinforcement Learning, which has shown considerable performance in natural language processing [197, 365], computer vision [11, 55, 278, 323, 376], robotics [268] and IoT systems [221, 222, 392], with applications such as video games [11], visual tracking [278, 323, 376], action prediction [55], robotic grasping [268], question answering [365] and dialogue generation [197]. In RL, one or more agent(s) interact with the outside environment, where optimal control policies are learnt through experience. Fig. 6 illustrates this iterative interaction loop, in which the agent starts without knowing anything about the environment or task.
Each time the agent takes an action based on the environment state, it receives a reward from the environment. RL optimises this process so that the agent learns to make decisions that yield higher rewards.

Fig. 6. The Reinforcement Learning paradigm (agent ↔ environment loop of actions, observations and rewards)

Discussion. In IoT environments, a variety of problems can be modelled using the aforementioned three approaches. The applications range from systems and networking [221, 222] and smart cities [198, 392] to smart grids [285, 358], etc. Before modelling, it is essential for users to first choose a suitable learning concept. The main selection criteria fall into two categories: function-based selection and power-consumption-based selection. Function-based selection chooses an appropriate concept based on functional differences. For example, RL benefits from its iterative environment ↔ agent interaction and can be applied to applications that need to interact with an environment or system, such as smart temperature control systems or recommendation systems (with the cold-start problem). On the other hand, TML algorithms are more suitable for modelling structured data (with high-level semantic attributes), especially when interpretability is required. DL models are typically used to model complex unstructured data, e.g., images, audio and time-series data, and are an ideal choice especially when training data is plentiful and interpretability requirements are low. Power-consumption-based selection chooses an appropriate model given constraints on computational power or latency. In contrast to TML, the powerful RL/DL models are normally computationally expensive, with high overhead. Recently, model compression techniques have been developed, which may provide a relatively efficient way of using RL/DL models in some IoT applications.
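Model compression covers several techniques (e.g., pruning, quantization and knowledge distillation). As a minimal, framework-free sketch of one of them, magnitude-based weight pruning, the toy function below (our own illustration, not from the surveyed systems) zeroes out the smallest-magnitude weights so the model becomes sparse and cheaper to store and execute:

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest magnitude.

    A toy illustration of magnitude-based pruning; real systems prune
    tensors layer by layer and usually fine-tune the model afterwards.
    """
    k = int(len(weights) * sparsity)
    # indices of the k smallest-magnitude weights
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    drop = set(order[:k])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

w = [0.8, -0.05, 0.3, 0.01, -0.6, 0.02]
pruned = prune_by_magnitude(w, 0.5)  # removes the 3 smallest: 0.01, 0.02, -0.05
print(pruned)  # [0.8, 0.0, 0.3, 0.0, -0.6, 0.0]
```

The zeroed weights can then be stored in a sparse format and skipped at inference time, which is what makes compressed RL/DL models more palatable on constrained IoT hardware.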
However, on some mobile platforms with very limited hardware resources (e.g., power, memory, storage), it is still challenging to employ compressed RL/DL models, especially when there are performance requirements (e.g., accuracy or real-time inference) [59]. On the other hand, lightweight TML may be more efficient, yet reasonable accuracy can only be achieved with appropriate features (e.g., high-level attributes derived from time-consuming feature engineering).
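To make the feature-engineering point concrete, a lightweight TML pipeline for, say, accelerometer-based activity recognition might hand-craft a few high-level statistics from each raw sensor window and feed them to a very simple classifier. Everything below (the chosen features, class names and centroid values) is a hypothetical illustration, not a pipeline from the surveyed literature:

```python
import math

def engineer_features(window):
    """Hand-crafted features from a raw sensor window (heuristic TML-style step)."""
    n = len(window)
    mean = sum(window) / n
    energy = sum(x * x for x in window) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in window) / n)
    return [mean, std, energy]

def nearest_centroid(features, centroids):
    """Classify by the closest class centroid in feature space (a simple TML model)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# Hypothetical centroids learned from labelled 'still' vs 'walking' windows.
centroids = {"still": [0.0, 0.05, 0.01], "walking": [0.1, 0.8, 0.7]}
window = [0.9, -0.8, 1.0, -0.7, 0.8]  # a high-variance (active) raw window
label = nearest_centroid(engineer_features(window), centroids)
print(label)  # walking
```

The model itself is trivially cheap; all of the discriminative power sits in the engineered features, which is exactly the trade-off between lightweight TML and DL discussed above.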
doi:10.1145/3398020