A theory of goal-oriented communication

Oded Goldreich, Brendan Juba, Madhu Sudan
2011 Proceedings of the 30th annual ACM SIGACT-SIGOPS symposium on Principles of distributed computing - PODC '11  
We put forward a general theory of goal-oriented communication, where communication is not an end in itself, but rather a means to achieving some goals of the communicating parties. The goals can vary from setting to setting, and we provide a general framework for describing any such goal. In this context, "reliable communication" means overcoming the (potential) initial misunderstanding between parties towards achieving a given goal. We identify a main concept, which we call sensing, that
captures the party's ability to check whether progress is made towards achieving the goal. We then show that if sensing is available, then the gap between a priori mutual understanding and the lack of it can be bridged. For example, if providing the parties with an adequate interpreter allows them each to achieve their (possibly different) goals, then they can achieve their goals also without such an interpreter (although they may misunderstand each other and err at the beginning). Or, if each server (in a predetermined class of servers) can help some user (who understands the server) achieve its goal, then there exists a user strategy that achieves the goal no matter with which server it communicates.

* An early version of this work appeared as an ECCC report [10].

1 Specifically, Shannon [15] asserts: "Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem."

2 In the printing example above, the goal of the computer is to ensure that the image is printed properly, while in the computational example, the goal of the weak computer is to ensure that the computational task is executed correctly.

Related work in AI. It is not surprising that a model of goal-oriented, computationally limited agents has also been considered in AI. In particular, the Asymptotic Bounded Optimal Agents, introduced by Russell and Subramanian [13], bear some similarity to the universal communicators we consider here.
The similarity to our work lies merely in the attempt to capture the notion of a goal and in the related definitions of "optimal" achievement of goals. The crucial difference is that they consider only a single player: in their work the goal is achieved by a user (called an agent) that acts directly on the environment and obtains no help from a server (with whom it may need to communicate, while establishing an adequate level of mutual understanding), and so no issues analogous to incompatibilities with the server ever arise. Indeed, the question of the meaning of communication (i.e., understanding and misunderstanding) does not arise in their studies.

7 Our universal communicators are also similar to the universal agents considered by Hutter [8]. Like Russell and Subramanian, Hutter considers a single agent that interacts with the environment, and so there is no parallel to our interest in the communication with the server. In addition, Hutter's results are obtained in a control-theoretic, reinforcement-learning setting, that is, a model in which the environment is assumed to provide the value of the agent's actions explicitly as feedback. Although we sometimes consider such settings, in general we assume that the user needs to decide for itself whether or not communication is successful.

Access to a helpful server does not suffice; it is only a necessary requirement: we (as users) need to be able to communicate effectively with this server, which means communicating in a way such that the server understands what we say and/or we understand the server's answers. A key point here is that the user is only guaranteed access to some helpful server, whereas the class of helpful servers contains a large variety of servers, which use different communication languages (or formats or protocols).
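The interplay between helpful servers, candidate interpretations, and sensing can be illustrated with a minimal toy sketch: the user enumerates possible "languages" for addressing the server and uses a sensing function to check whether the goal has been achieved. All names, the two toy servers, and the perfect sensing function are illustrative assumptions, not the paper's actual construction:

```python
# Toy sketch (illustrative only): a user that does not know the server's
# language enumerates candidate interpreters and relies on sensing to
# detect whether the goal was achieved.

def universal_user(goal_input, server, interpreters, sense):
    """Try each candidate interpretation until sensing reports success."""
    for interpret in interpreters:
        try:
            answer = server(interpret(goal_input))
        except Exception:
            continue  # this interpretation is not understood by the server
        if sense(goal_input, answer):  # sensing: check progress toward the goal
            return answer
    return None  # no helpful interpretation found in the enumerated class

# Toy goal: obtain x + 1. The two servers are helpful but speak
# different languages (decimal vs. binary request encodings).
def server_decimal(msg):
    return int(msg, 10) + 1

def server_binary(msg):
    return int(msg, 2) + 1

interpreters = [lambda x: format(x, "d"), lambda x: format(x, "b")]
sense = lambda x, ans: ans == x + 1  # a verifiable goal gives perfect sensing

print(universal_user(5, server_binary, interpreters, sense))  # -> 6
```

The same user strategy succeeds against either server without knowing in advance which one it faces, which is the defining property of a universal strategy; the sketch also shows why sensing is essential, since without it the user could not tell a misunderstood request from a correct answer.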
Not knowing a priori with which server it communicates, the user has to cope with the communication problem that is at the core of the current work: how to conduct meaningful communication with entities that are alien to it.

Universal user strategies. A strategy is called universal with respect to a given goal if it can overcome this communication problem with respect to that goal. That is, the strategy achieves the goal when communicating with an arbitrary helpful server. In other words, if some
doi:10.1145/1993806.1993863 dblp:conf/podc/GoldreichJS11