Online optimization for low-latency computational caching in Fog networks
2017 IEEE Fog World Congress (FWC)
Enabling effective fog computing for emerging applications such as augmented and virtual reality requires processing data with low latency. In this paper, a novel computational caching framework is proposed to minimize fog latency by storing and reusing intermediate computation results (IRs). Under this paradigm, a fog node can store IRs from previous computations and can also download IRs from neighboring nodes at the expense of additional transmission latency.
However, due to the unpredictable arrival of future computational operations and the limited memory size of the fog node, it is challenging to properly maintain the set of stored IRs. Thus, under uncertainty about future computation, the goal of the proposed framework is to minimize the sum of the transmission and computational latency by selecting the IRs to be downloaded and stored. To solve the problem, an online computational caching algorithm is developed that enables the fog node to schedule, download, and manage IRs to compute arriving operations. Competitive analysis is used to derive an upper bound on the competitive ratio of the online algorithm. Simulation results show that the total latency can be reduced by up to 26.8% by leveraging the computational caching method when compared to the case without computational caching.
¹ Note that computational caching here is different from the original notion of computational caching used in , in which computer networks cache the act of computation, i.e., they store the trajectories that are encountered during the execution of a software's instructions, and apply it in new contexts.
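To make the storage and reuse trade-off concrete, the following is a minimal, illustrative sketch of a fog node's IR cache. All names, the per-unit transmission-latency model, and the greedy recompute-cost eviction heuristic are assumptions for illustration; they are not the online algorithm or competitive analysis from the paper.

```python
# Hypothetical sketch of computational caching at a fog node (not the
# paper's algorithm): serve an operation from the local IR cache if
# possible, otherwise download the IR from a neighbor or recompute it,
# evicting stored IRs greedily when memory is full.
from dataclasses import dataclass


@dataclass
class IR:
    op_id: str               # identifier of the operation producing this IR
    size: int                # memory units the IR occupies in the cache
    compute_latency: float   # latency to recompute the IR from scratch


class FogNode:
    def __init__(self, capacity, neighbor_tx_latency):
        self.capacity = capacity
        self.tx = neighbor_tx_latency  # assumed per-unit transmission latency
        self.cache = {}                # op_id -> IR
        self.used = 0

    def _evict_until_fits(self, need):
        # Greedy heuristic (an assumption): evict the IR that is cheapest
        # to lose, i.e. lowest recompute latency per unit of memory.
        while self.used + need > self.capacity and self.cache:
            victim = min(self.cache.values(),
                         key=lambda ir: ir.compute_latency / ir.size)
            self.used -= victim.size
            del self.cache[victim.op_id]

    def serve(self, ir, neighbor_has_it=False):
        """Return the latency incurred to obtain `ir`, caching it after."""
        if ir.op_id in self.cache:
            return 0.0                       # reuse the stored IR for free
        if neighbor_has_it:
            latency = self.tx * ir.size      # download: transmission latency
        else:
            latency = ir.compute_latency     # compute locally from scratch
        self._evict_until_fits(ir.size)
        if self.used + ir.size <= self.capacity:
            self.cache[ir.op_id] = ir
            self.used += ir.size
        return latency
```

A short usage example: serving the same operation twice shows the latency saved by reuse, and a neighbor download costs only the transmission latency.

```python
node = FogNode(capacity=10, neighbor_tx_latency=0.5)
a = IR("fft_block", size=4, compute_latency=8.0)
node.serve(a)                       # first arrival: compute locally (8.0)
node.serve(a)                       # cached: reused at zero latency (0.0)
b = IR("matmul_tile", size=4, compute_latency=6.0)
node.serve(b, neighbor_has_it=True)  # download: 0.5 * 4 = 2.0
```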