Kolmogorov–Loveland randomness and stochasticity
Annals of Pure and Applied Logic
An infinite binary sequence X is Kolmogorov-Loveland (or KL) random if there is no computable non-monotonic betting strategy that succeeds on X in the sense of having an unbounded gain in the limit while betting successively on bits of X. A sequence X is KL-stochastic if there is no computable non-monotonic selection rule that selects from X an infinite, biased sequence. One of the major open problems in the field of effective randomness is whether Martin-Löf randomness is the same as KL-randomness.
Our first main result states that KL-random sequences are close to Martin-Löf random sequences insofar as every KL-random sequence has arbitrarily dense subsequences that are Martin-Löf random. A key lemma in the proof of this result is that for every effective split of a KL-random sequence, at least one of the halves is Martin-Löf random. However, this splitting property does not characterize KL-randomness; we construct a sequence that is not even computably random such that every effective split yields two subsequences that are 2-random. Furthermore, we show for any KL-random sequence A that is computable in the halting problem that, first, for any effective split of A both halves are Martin-Löf random and, second, for any computable, nondecreasing, and unbounded function g and almost all n, the prefix of A of length n has prefix-free Kolmogorov complexity at least n − g(n). Again, the latter property does not characterize KL-randomness, even when restricted to left-r.e. sequences; we construct a left-r.e. sequence that has this property but is not KL-stochastic, in fact not even Mises-Wald-Church stochastic. Turning our attention to KL-stochasticity, we construct a non-empty Π⁰₁ class of KL-stochastic sequences that are not weakly 1-random; by the usual basis theorems we obtain such sequences that in addition are left-r.e., are low, or are of hyperimmune-free degree. Our second main result asserts that every KL-stochastic sequence has effective dimension 1; equivalently, a sequence cannot be KL-stochastic if it has infinitely many prefixes that can be compressed by a factor of α < 1. This improves on a result of Muchnik, who showed that, were they to exist, such compressible prefixes could not be found effectively.
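The two combinatorial operations the abstract refers to can be made concrete with a small sketch. The code below is illustrative only and not from the paper: it shows the simplest effective split, along the computable set of even positions, and a monotonic (Mises-Wald-Church style) selection rule that, on the periodic sequence 0101..., selects a maximally biased subsequence; all names are hypothetical.

```python
# Illustrative sketch (not from the paper): an effective split of a
# sequence along a computable set of positions, and a monotonic
# selection rule witnessing non-stochasticity of a periodic sequence.

def even_odd_split(bits):
    """Split along the computable set of even positions (simplest
    instance of an effective split)."""
    return bits[0::2], bits[1::2]

def select(bits, rule):
    """Monotonic selection: scan left to right; `rule(prefix)` decides,
    from the bits already seen, whether to select the next bit."""
    selected, prefix = [], []
    for b in bits:
        if rule(prefix):
            selected.append(b)
        prefix.append(b)
    return selected

periodic = [0, 1] * 8                 # 0101...: far from stochastic
# Rule: select the bit that immediately follows each occurrence of 0.
after_zero = lambda prefix: bool(prefix) and prefix[-1] == 0

evens, odds = even_odd_split(periodic)    # [0]*8 and [1]*8
picked = select(periodic, after_zero)     # selects only 1s: fully biased
```

A selection rule that, like this one, produces an infinite subsequence in which the frequency of ones does not tend to 1/2 is exactly the kind of witness ruled out by KL-stochasticity (there with non-monotonic scanning as well).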
The major criticism brought forward against the notion of Martin-Löf randomness is that, while it captures almost all important probabilistic laws, it is not completely intuitive, since it is characterized not by computable martingales but by recursively enumerable ones (or by an equivalent r.e. test notion). This point was first raised by Schnorr (26; 27), who argued that Martin-Löf randomness is too strong to be regarded as an effective notion of randomness. He proposed two alternatives. One is defined via coverings whose measures are computable real numbers (not merely left-r.e. ones), leading to the concept today known as Schnorr randomness (27). The other is based on the unpredictability paradigm; it demands that no computable betting strategy should win against a random sequence. This notion is commonly referred to as computable randomness (27).
If one is interested in obtaining stronger notions of randomness, closer to Martin-Löf randomness, without abandoning Schnorr's paradigm, one might stay with computable betting strategies and consider more general ways in which those strategies could be allowed to bet. One possibility is to remove the requirement that a betting strategy must bet on the bits of a given sequence in their natural order; instead, the strategy itself determines which position of the sequence it wants to bet on next. The resulting concept of non-monotonic betting strategies is a generalization of the concept of monotonic betting strategies. An infinite binary sequence against which no computable non-monotonic betting strategy succeeds is called Kolmogorov-Loveland random, or KL-random for short. The concept is named after Kolmogorov (9) and Loveland (14), who studied non-monotonic selection rules to define corresponding stochasticity concepts, which we will describe later.
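The distinction between monotonic and non-monotonic betting can be made concrete with a small finite-stage sketch. This is hypothetical illustrative code, not from the paper: a strategy sees the bits it has scanned so far and returns the next position to inspect (possibly out of order), a predicted bit, and the fraction of its capital to stake at fair odds; it succeeds if its capital is unbounded in the limit.

```python
# Sketch of a finite-stage non-monotonic betting game (illustrative only;
# KL-randomness quantifies over all computable strategies and requires
# unbounded gain on an infinite sequence).

def play(sequence, strategy, rounds):
    """Run `strategy` for `rounds` bets against a finite 0/1 `sequence`.

    `strategy(history)` takes the list of (position, bit) pairs scanned
    so far and returns (position, predicted_bit, stake_fraction); the
    position must not have been scanned before. Fair-odds payoff: a
    correct prediction doubles the staked amount, a wrong one loses it.
    """
    capital = 1.0
    history = []
    scanned = set()
    for _ in range(rounds):
        pos, guess, frac = strategy(history)
        assert pos not in scanned and 0.0 <= frac <= 1.0
        stake = frac * capital
        bit = sequence[pos]
        capital += stake if bit == guess else -stake
        history.append((pos, bit))
        scanned.add(pos)
    return capital

# A (hypothetical) non-monotonic strategy: scan positions right to left,
# always predicting 0 with half the current capital at stake.
def right_to_left_zeros(history):
    pos = 9 - len(history)            # positions 9, 8, ..., 0
    return pos, 0, 0.5

final = play([0] * 10, right_to_left_zeros, 10)
# On the all-zero sequence every bet wins, so capital grows by 3/2 per round.
```

A monotonic strategy is the special case in which the returned position is always the least unscanned one; allowing the strategy to choose the order is exactly the extra power that distinguishes KL-randomness from computable randomness.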
The concept of KL-randomness is robust insofar as it remains the same no matter whether one defines it in terms of computable or partial computable non-monotonic betting strategies (18); in terms of the latter, the concept was introduced by Muchnik, Semenov, and Uspensky (20) in 1998. They showed that Martin-Löf randomness implies KL-randomness, but it is not known whether the two concepts are different. This question was raised by Muchnik, Semenov, and Uspensky (20) and by Ambos-Spies and Kučera (1). It is still a major open problem in the area. A proof that both concepts are the same would give a striking argument against Schnorr's criticism of Martin-Löf randomness. Most researchers conjecture that the notions are different. However, a result of Muchnik (20) indicates that KL-randomness is rather close to Martin-Löf randomness. Recall that it is possible to characterize Martin-Löf randomness as incompressibility with respect to prefix-free Kolmogorov complexity K: a sequence A is Martin-Löf random if and only if there is a constant c such that for all n the prefix-free Kolmogorov complexity of the length-n prefix A↾n of A is at least n − c. It follows that a sequence A cannot be Martin-Löf random if there is a function h such that

    K(A↾h(c)) ≤ h(c) − c  for all c.    (1)

On the other hand, by the result of Muchnik (20), a sequence A cannot be KL-random if (1) holds for a computable function h. So the difference between Martin-Löf randomness and KL-randomness appears, from this viewpoint, rather small. Not being Martin-Löf random means that for any given constant bound there are infinitely many initial segments whose compressibility exceeds this bound. If, moreover, we are able to detect such initial segments effectively (by means of a computable function), then the sequence cannot even be KL-random.
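The shape of Muchnik's criterion can be illustrated with a heavily hedged toy computation. Prefix-free Kolmogorov complexity K is incomputable, so the sketch below uses the length of zlib's output as a crude computable stand-in for "length of a short description"; this is only an analogy, not the actual complexity function. For a highly regular sequence, the compressed length of the n-bit prefix stays far below n, so a computable search in the spirit of the function h in (1) succeeds; all function names here are hypothetical.

```python
import zlib

# Hedged illustration only: zlib output length as a computable stand-in
# for a short description length. For a maximally regular sequence, a
# computable search finds, for each c, a prefix compressible (in this
# weak zlib sense) by at least c bits -- mirroring the shape of (1).

def compressed_len_bits(bits):
    """Bits used by zlib for the given 0/1 prefix (packed into bytes)."""
    packed = bytes(
        sum(b << (7 - i) for i, b in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )
    return 8 * len(zlib.compress(packed, 9))

def h(c, bits_of_a):
    """Computable search for a prefix length n with
    compressed_len_bits(prefix) <= n - c (the zlib analogue of (1))."""
    n = 1
    while compressed_len_bits(bits_of_a[:n]) > n - c:
        n += 1
    return n

regular = [0] * 100000        # maximally regular sequence
n = h(200, regular)
# The search returns a prefix length n whose zlib "description" is at
# least 200 bits shorter than n, as guaranteed by the loop condition.
assert compressed_len_bits(regular[:n]) <= n - 200
```

For a genuinely Martin-Löf random sequence no such computable h can exist for K, whereas Muchnik's theorem says that for a KL-random sequence the c-compressible prefixes, though they may exist, cannot be located by any computable function.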