Beyond Bernoulli: Generating Random Outcomes that cannot be Distinguished from Nature

Cynthia Dwork, Michael P. Kim, Omer Reingold, Guy N. Rothblum, Gal Yona
2022 International Conference on Algorithmic Learning Theory  
Recently, Dwork et al. (STOC 2021) introduced Outcome Indistinguishability as a new desideratum for binary prediction tasks. Outcome Indistinguishability (OI) articulates the goals of prediction in the language of computational indistinguishability: a predictor is Outcome Indistinguishable if no computationally-bounded observer can distinguish Nature's outcomes from outcomes that are generated based on the predictions. In this sense, OI suggests a generative model for binary outcomes that cannot be refuted given the empirical evidence and computational resources at hand.

In this work, we extend Outcome Indistinguishability beyond Bernoulli, to outcomes that live in a large discrete or continuous domain. While the idea of OI for non-binary outcomes is natural for many applications, defining OI in generality is not simply a syntactic exercise. We introduce and study multiple definitions of OI, each with its own semantics, for predictors that completely specify each individual's outcome distribution, as well as for predictors that only partially specify the outcome distributions through statistics, such as moments. With the definitions in place, we provide learning algorithms for producing OI generative outcome models for general random outcomes. Finally, we study the relation between Outcome Indistinguishability and Multicalibration of statistics (beyond the mean) and relate our findings to the recent work of Jung et al. (COLT 2021) on Moment Multicalibration. We find an equivalence between Outcome Indistinguishability and Multicalibration that is more subtle than in the binary case and that sheds light on the techniques employed by Jung et al. to obtain Moment Multicalibration.
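To make the binary (Bernoulli) starting point concrete, the following minimal sketch simulates the OI-style test the abstract describes: a bounded observer tries to distinguish outcomes sampled by Nature from outcomes sampled according to a predictor. Everything here is illustrative, not from the paper: Nature's outcome probabilities, the predictor, and the particular distinguisher (a subgroup mean) are all hypothetical choices, and a real OI guarantee quantifies over a whole class of such distinguishers.

```python
import random

random.seed(0)

n = 100_000
# Hypothetical Nature: each individual has a feature x in [0, 1] and a true
# Bernoulli outcome probability p*(x) = 0.2 + 0.6 x (an illustrative choice).
xs = [random.random() for _ in range(n)]
nature_p = [0.2 + 0.6 * x for x in xs]
# A predictor p~(x); here it happens to match Nature exactly, so no
# distinguisher should gain a noticeable advantage.
model_p = [0.2 + 0.6 * x for x in xs]

# Sample one outcome per individual from each generative model.
nature_outcomes = [1 if random.random() < p else 0 for p in nature_p]
model_outcomes = [1 if random.random() < p else 0 for p in model_p]

def distinguisher(features, outcomes, thresh=0.5):
    """A simple bounded observer: the mean outcome on the subgroup {x > thresh}."""
    grp = [y for x, y in zip(features, outcomes) if x > thresh]
    return sum(grp) / len(grp)

# The observer's "advantage": how differently it behaves on Nature's samples
# versus the predictor's samples. Small advantage = indistinguishable to it.
adv = abs(distinguisher(xs, nature_outcomes) - distinguisher(xs, model_outcomes))
print(f"distinguishing advantage: {adv:.4f}")
```

Replacing the predictor with a miscalibrated one (say, a constant `model_p = [0.5] * n`) makes this same subgroup-mean observer succeed, which is the intuition behind the paper's connection between OI and (multi)calibration.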