Message Reduction in the LOCAL Model is a Free Lunch
Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing - PODC '19
A new spanner construction algorithm is presented, working under the LOCAL model with unique edge IDs. Given an n-node communication graph, a spanner with constant stretch and O(n^{1+ε}) edges (for an arbitrarily small constant ε > 0) is constructed in a constant number of rounds, sending O(n^{1+ε}) messages whp. Consequently, we conclude that every t-round LOCAL algorithm can be transformed into an O(t)-round LOCAL algorithm that sends O(t · n^{1+ε}) messages whp. This improves upon all previous
message-reduction schemes for LOCAL algorithms, which incur a log^{Ω(1)} n blow-up of the round complexity.

An α-spanner of a graph G is a spanning subgraph in which the distance between any two vertices is at most α times their distance in G.¹ More general spanners, called (α, β)-spanners, are also considered, where the spanner distance between any two nodes is at most α times their distance in G plus an additive β term.

Sparse low-stretch spanners are known to provide the means to save on message complexity in the LOCAL model [27, 31] without a significant increase in the round complexity. This can be done via the following classic simulation technique. Given an n-node communication graph G = (V, E) and a LOCAL algorithm A whose run A(G) on G takes t rounds, (1) construct an α-spanner H = (V, S) of G; and (2) simulate each communication round of A(G) by α communication rounds in H, so that a message sent over the edge (u, v) ∈ E under A(G) is now sent over a (u, v)-path of length at most α in H.

The crux of this approach is that the simulating algorithm executed in stage (2) runs for αt rounds and sends at most 2αt · |S| messages. Therefore, if α and |S| are "small", then the simulating algorithm incurs "good" round and message bounds. In particular, the performance of the simulating algorithm does not depend on the number |E| of edges in the underlying graph G.

What about the performance of the spanner construction in the "preprocessing" stage (1), though? A common thread among distributed spanner construction algorithms is that they all send Ω(|E|) messages when running on a graph G = (V, E). Consequently, accounting for the messages sent during this preprocessing stage, the overall message complexity of the aforementioned simulation technique includes a seemingly inherent Ω(|E|) term. The following research question, which lies at the heart of distributed message reduction schemes, is therefore left open.
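To make the stretch/size trade-off behind stage (1) concrete, the following is a minimal centralized sketch of the classic greedy (2k-1)-spanner construction (Althöfer et al.) for unweighted graphs: an edge joins the spanner only if its endpoints are currently farther apart than 2k-1, which guarantees stretch 2k-1 and O(n^{1+1/k}) edges. This is an illustration of the spanner notion only, not the paper's distributed LOCAL algorithm; the function names are ours.

```python
from collections import deque

def bfs_dist(adj, s, t, cap):
    # BFS distance from s to t in the (unweighted) graph adj,
    # truncated at depth cap; returns cap + 1 if t is farther than cap.
    if s == t:
        return 0
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if dist[u] >= cap:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if v == t:
                    return dist[v]
                q.append(v)
    return cap + 1

def greedy_spanner(n, edges, k):
    # Greedy (2k-1)-spanner: scan the edges and add (u, v) only if the
    # spanner built so far leaves u and v at distance > 2k - 1.  Every
    # skipped edge is then covered by a path of length <= 2k - 1.
    adj = {u: set() for u in range(n)}
    spanner = []
    for u, v in edges:
        if bfs_dist(adj, u, v, 2 * k - 1) > 2 * k - 1:
            adj[u].add(v)
            adj[v].add(u)
            spanner.append((u, v))
    return spanner
```

For example, on the complete graph K_5 with k = 2 (stretch 3), the greedy rule keeps only a star of 4 edges, and every dropped edge is replaced by a path of length 2 through the center.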
² Alternatively, the algorithm can run under the rather common KT1 model variant, where the nodes are associated with unique IDs and each node knows the ID of the other endpoint of each one of its incident edges; see the discussion in Section 1.2.
³ We say that an event occurs with high probability, abbreviated by whp, if the probability that it does not occur is at most n^{-c} for an arbitrarily large constant c.
⁴ The asymptotic notation Õ(·) may hide log^{O(1)} n factors.