Hierarchical Scheduling Mechanisms in Multi-Level Fog Computing

Maycon Peixoto, Thiago Genez, Luiz Fernando Bittencourt
2021 IEEE Transactions on Services Computing  
Delivering cloud-like computing facilities at the network edge provides ultra-low-latency access to computing services, yielding highly responsive handling of application requests. The concept of fog computing has emerged as a computing paradigm that adds layers of computing nodes, also known as micro data centers, cloudlets, or fog nodes, between the edge and the cloud. Based on this premise, this paper proposes a component-based service scheduler in a cloud-fog computing infrastructure comprising several layers of fog nodes between the edge and the cloud. The proposed scheduler aims to satisfy the applications' latency requirements by deciding which service components should be moved upward in the fog-cloud hierarchy to alleviate computing workloads at the network edge. A communication-aware policy is introduced for resource allocation to enforce resource access prioritization among applications. We evaluate the proposal using the well-known iFogSim simulator. Results suggest that the proposed component-based scheduling algorithm can reduce average delays for application services with stricter latency requirements while also reducing total network usage when applications exchange data between components. On average, our policy reduced the overload impact on network usage by approximately 11% compared to the best allocation policy in the literature, while maintaining acceptable delays for latency-sensitive applications.

[…] seconds of delay without experiencing quality-of-service degradation [6]. However, for applications with complex video and audio processing, such as online gaming [7] and other interactive services, a delay of a few tens of milliseconds can expose the application's performance to severe quality decay, making it unusable for activities that require prompt real-time responses, because crucial actions can lag or freeze [8]. Indeed, using the traditional cloud computing methodology for edge devices is undoubtedly a poor strategy for offloading latency-sensitive applications [9]. Clearly, the Achilles' heel is the network latency on the end-to-end communication channel between the IoT application (situated at the network edge) and the cloud data centers (confined to the network core) [10]. The fog computing concept emerged recently to address this issue.
It attempts to mitigate the relatively high latency of using traditional cloud computing resources to perform the offloading procedure for delay-critical services [7], [11]. It introduces cloud-like computing services very close to end devices, in an infrastructure that places small data centers (also called cloudlets [10] or fog nodes [12]) in the network between the edge and the core. A generic model of a fog computing network considers the deployment of several layers of fog nodes from the edge to the core, composing a hierarchy of computing nodes [8] (Figure 1). The higher a fog node is located in the hierarchy, the larger its computing capacity, since it must cover a broader set of users down the hierarchy. Analogously, the lower a fog node is established in the hierarchy, the closer to the edge it is situated, thus presenting lower communication delays to edge devices. As shown in [1]-[3], [5], [7], [8], [12], fog computing has indeed great potential for delay-critical applications in performing offloading without

This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
doi:10.1109/tsc.2021.3079110
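The hierarchy described above trades capacity against proximity: higher fog levels can serve more load but add communication delay. As a minimal illustration of a lowest-feasible-level placement policy, the sketch below uses made-up level names, capacities, and round-trip times (none of these values come from the paper, and this is not the paper's scheduling algorithm):

```python
# Illustrative sketch only: levels, capacities, and delays are assumed,
# not taken from the paper. Each level up the fog-cloud hierarchy offers
# more computing capacity but a larger round-trip delay from the edge.

FOG_HIERARCHY = [  # ordered from the edge (lowest) to the cloud (highest)
    {"name": "edge fog",   "capacity_mips": 1_000,   "rtt_ms": 5},
    {"name": "metro fog",  "capacity_mips": 10_000,  "rtt_ms": 20},
    {"name": "core cloud", "capacity_mips": 100_000, "rtt_ms": 120},
]

def place_component(demand_mips, max_latency_ms):
    """Place a service component at the lowest level with enough capacity;
    it is pushed upward only while its latency bound still holds."""
    for level in FOG_HIERARCHY:
        if level["rtt_ms"] > max_latency_ms:
            break  # every higher level is even farther from the edge
        if level["capacity_mips"] >= demand_mips:
            return level["name"]
    return None  # no feasible level: the request must be rejected or split

print(place_component(500, 10))     # fits at the edge
print(place_component(5_000, 50))   # pushed up to the metro fog layer
print(place_component(50_000, 50))  # only the cloud has capacity, but 120 ms > 50 ms
```

This captures, in miniature, why strict-latency components tend to stay near the edge while heavier, delay-tolerant components can be moved up the hierarchy to relieve edge overload.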