A potential new Polymath project: intransitive dice


A few days ago I received an email from Brian Conrey, who has a suggestion for a possible Polymath project. The problem he wants to see solved is a little different from the problems in most previous projects, in that it is not a well known and long-standing question of acknowledged importance. However, that is not an argument against trying it out, since it is still far from clear what kinds of problems are best suited to the polymathematical approach, and it would be good to get more data. And this problem has other qualities that could make it very suitable indeed. First of all, it is quite new — it arises from a paper published last year, though it appeared on arXiv in 2013 — so we do not yet have a clear idea of how difficult it is, which should give us hope that it may turn out to be doable. Secondly, and more importantly, it is a very attractive question.

Suppose you have a pair of dice D_1,D_2 with different numbers painted on their sides. Let us say that D_1 beats D_2 if, thinking of them as random variables, the probability that D_1>D_2 is greater than the probability that D_2>D_1. (Here, the rolls are of course independent, and each face on each die comes up with equal probability.) It is a famous fact in elementary probability that this relation is not transitive. That is, you can have three dice D_1,D_2,D_3 such that D_1 beats D_2, D_2 beats D_3, and D_3 beats D_1.
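To make the famous fact concrete, here is a quick check of one standard intransitive triple (the particular dice below are a textbook example rather than anything from the paper, and the little helper function is my own). Thinking of each die as a uniform random variable over its faces, the sketch computes the probability that one die rolls strictly higher than another:

```python
from itertools import product

def win_prob(x, y):
    """P(X > Y) when X and Y are independent uniform rolls of dice x and y."""
    return sum(a > b for a, b in product(x, y)) / (len(x) * len(y))

# A standard example of an intransitive triple (the faces need not lie in 1..6 here).
D1 = (2, 2, 4, 4, 9, 9)
D2 = (1, 1, 6, 6, 8, 8)
D3 = (3, 3, 5, 5, 7, 7)

for X, Y in [(D1, D2), (D2, D3), (D3, D1)]:
    print(win_prob(X, Y), win_prob(Y, X))
# Each pair comes out as 20/36 against 16/36 (printed as decimals),
# so D1 beats D2, D2 beats D3, and D3 beats D1.
```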

Brian Conrey, James Gabbard, Katie Grant, Andrew Liu and Kent E. Morrison became curious about this phenomenon and asked the kind of question that comes naturally to an experienced mathematician: to what extent is intransitivity “abnormal”? The way they made the question precise is also one that comes naturally to an experienced mathematician: they looked at n-sided dice for large n and asked about limiting probabilities. (To give another example where one might do something like this, suppose one asked “How hard is Sudoku?” Well, any Sudoku puzzle can be solved in constant time by brute force, but if one generalizes the question to arbitrarily large Sudoku boards, then one can prove that the puzzle is NP-hard to solve, which gives a genuine insight into the usual situation with a 9\times 9 board.)

Let us see how they formulate the question. The “usual” n-sided die can be thought of as a random variable that takes values in the set \{1,2,\dots,n\}, each with equal probability. A general n-sided die is one where different probability distributions on \mathbb N are allowed. There is some choice about which ones to go for, but Conrey et al go for the following natural conditions.

  1. For each integer m, the probability p_m that it occurs is a multiple of 1/n.
  2. If p_m>0, then 1\leq m\leq n.
  3. The expectation \sum_m mp_m is the same as it is for the usual die — namely (n+1)/2.

Equivalently, an n-sided die is a multiset of size n with elements in \{1,2,\dots,n\} and sum n(n+1)/2. For example, (2,2,3,3,5,6) and (1,2,3,3,6,6) are six-sided dice.
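Checking these conditions mechanically is trivial; here is a minimal sketch (the function name is mine, not the authors'):

```python
def is_valid_die(faces):
    """True if `faces` is an n-sided die in the above sense:
    n values, each in {1, ..., n}, with sum n(n+1)/2."""
    n = len(faces)
    return all(1 <= f <= n for f in faces) and sum(faces) == n * (n + 1) // 2

print(is_valid_die((2, 2, 3, 3, 5, 6)))  # True
print(is_valid_die((1, 2, 3, 3, 6, 6)))  # True
print(is_valid_die((1, 2, 3, 4, 5, 5)))  # False: the faces sum to 20, not 21
```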

If we have two n-sided dice A and B represented in this way as (a_1,a_2,\dots,a_n) and (b_1,b_2,\dots,b_n), then A beats B if the number of pairs (a_i,b_j) with a_i>b_j exceeds the number of pairs with a_i<b_j.
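Here is a small sketch of that comparison (again my own code rather than the authors'), together with an intransitive triple that satisfies conditions 1-3: each die below has faces in \{1,\dots,6\} summing to 21, and, unless I have miscounted, A beats B by 25 pairs to 11, while B beats C and C beats A by 21 pairs to 15.

```python
def beats(x, y):
    """True if die x beats die y: strictly more pairs with x_i > y_j
    than pairs with x_i < y_j."""
    wins = sum(a > b for a in x for b in y)
    losses = sum(a < b for a in x for b in y)
    return wins > losses

A = (1, 4, 4, 4, 4, 4)
B = (3, 3, 3, 3, 3, 6)
C = (2, 2, 2, 5, 5, 5)

print(beats(A, B), beats(B, C), beats(C, A))  # True True True: an intransitive cycle
```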

The question can be formulated a little over-precisely as follows.

Question. Let A, B and C be three n-sided dice chosen uniformly at random. What is the probability that A beats C if you are given that A beats B and B beats C?

I say “over-precisely” because there isn’t a serious hope of finding an exact formula for this conditional probability. However, it is certainly reasonable to ask about the limiting behaviour as n tends to infinity.

It’s important to be clear what “uniformly at random” means in the question above. The authors consider two n-sided dice to be the same if the probability distributions are the same, so in the sequence representation a random die is a random non-decreasing sequence of integers from \{1,2,\dots,n\} that add up to n(n+1)/2 — the important word there being “non-decreasing”. Another way of saying this is that, as indicated above, the distribution is uniform over multisets (with the usual notion of equality) rather than sequences.
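For small n the sample space can simply be listed, which makes the model concrete: an n-sided die is a non-decreasing n-tuple with entries in \{1,2,\dots,n\} summing to n(n+1)/2, and “uniformly at random” means uniform over this list. A minimal enumeration sketch (my code):

```python
from itertools import combinations_with_replacement

def all_dice(n):
    """Every n-sided die, as a non-decreasing tuple of faces from {1, ..., n}
    with sum n(n+1)/2."""
    target = n * (n + 1) // 2
    return [d for d in combinations_with_replacement(range(1, n + 1), n)
            if sum(d) == target]

for n in range(3, 7):
    print(n, len(all_dice(n)))
# If I have counted correctly, this gives 2, 5, 12 and 32 dice for n = 3, 4, 5, 6.
```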

What makes the question particularly nice is that there is strong evidence for what the answer ought to be, and the apparent answer is, at least initially, quite surprising. The authors make the following conjecture.

Conjecture. Let A, B and C be three n-sided dice chosen uniformly at random. Then the probability that A beats C if you are given that A beats B and B beats C tends to 1/2 as n tends to infinity.

This is saying that if you know that A beats B and that B beats C, you basically have no information about whether A beats C.

They back up this conjecture with some experimental evidence. When n=6, there turn out to be 4417 triples of dice (A,B,C) such that A beats B and B beats C. For 930 of these triples A and C were tied; for 1756 of them A beat C; and for the remaining 1731 C beat A.
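Counts of this kind are easy to reproduce by brute force when n=6. Here is a self-contained sketch (my own code; I believe, but have not carefully checked, that it follows the same counting conventions as the paper), which enumerates all six-sided dice and classifies every ordered triple (A,B,C) in which A beats B and B beats C:

```python
from itertools import combinations_with_replacement, product

def all_dice(n):
    target = n * (n + 1) // 2
    return [d for d in combinations_with_replacement(range(1, n + 1), n)
            if sum(d) == target]

def compare(x, y):
    """Positive if x beats y, negative if y beats x, zero if they tie."""
    return sum((a > b) - (a < b) for a in x for b in y)

dice = all_dice(6)
a_beats_c = tied = c_beats_a = 0
for A, B, C in product(dice, repeat=3):
    if compare(A, B) > 0 and compare(B, C) > 0:
        r = compare(A, C)
        if r > 0:
            a_beats_c += 1
        elif r < 0:
            c_beats_a += 1
        else:
            tied += 1

print("qualifying triples:", a_beats_c + tied + c_beats_a)
print("A beats C:", a_beats_c, " tied:", tied, " C beats A:", c_beats_a)
```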

It seems obvious that as n tends to infinity, the probability that two random n-sided dice are tied tends to zero. Somewhat surprisingly, that is not known, and is also conjectured in the paper. It might make a good first target.
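As a crude way of getting a feel for this, one can compute the exact proportion of tied pairs for small n by enumeration. A minimal sketch (my code; of course, small values of n prove nothing about the limit):

```python
from itertools import combinations, combinations_with_replacement

def all_dice(n):
    target = n * (n + 1) // 2
    return [d for d in combinations_with_replacement(range(1, n + 1), n)
            if sum(d) == target]

def tied(x, y):
    return sum((a > b) - (a < b) for a in x for b in y) == 0

for n in range(3, 9):
    dice = all_dice(n)
    pairs = list(combinations(dice, 2))
    frac = sum(tied(x, y) for x, y in pairs) / len(pairs)
    print(n, round(frac, 4))
```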

The reason these problems are hard is at least in part that the uniform distribution over non-decreasing sequences of length n with entries in \{1,\dots,n\} that add up to n(n+1)/2 is hard to understand. In the light of that, it is tempting to formulate the original question — just how abnormal is intransitivity? — using a different, more tractable distribution. However, experimental evidence presented by the authors in their paper indicates that the problem is quite sensitive to the distribution one chooses, so it is not completely obvious that a good reformulation of this kind exists. But it might still be worth thinking about.

Assuming that the conjecture is true, I would imagine that the heuristic reason for its being true is that for large n, two random dice will typically be “close” in the sense that although one beats the other, it does not do so by very much, and therefore we do not get significant information about what it looks like just from knowing that it beats the other one.

That sounds a bit vague, so let me give an analogy. Suppose we choose random unit vectors u,v,w in \mathbb R^2, and we are given the additional information that \langle u,v\rangle\geq 0 and \langle v,w\rangle\geq 0. What is then the probability that \langle u,w\rangle\geq 0? This is a simple exercise, and, unless I’ve messed up, the answer is 3/4. That is, knowing that in some sense u is close to v and v is close to w makes it more likely that u is close to w.

But now let’s choose our random vectors from \mathbb R^n. The picture changes significantly. For fixed u, the concentration of measure phenomenon tells us that for almost all v the inner product \langle u,v\rangle is close to zero, so we can think of u as the North Pole and the unit sphere as being almost all contained in a thin strip around the equator. And if v happens to be just in the northern hemisphere — well, it could just as easily have landed in the southern hemisphere. After a change of basis, we can assume that u=e_1 and v is very close to e_2. So when we choose a third vector w, we are asking whether the sign of its second coordinate is correlated with the sign of its first. And the answer is no — or rather, yes but only very weakly.
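Both claims are easy to check numerically. Here is a quick Monte Carlo sketch (mine, assuming numpy is available; since only the signs of the inner products matter, the Gaussian vectors need not be normalised):

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional_prob(d, trials=200_000):
    """Estimate P(<u,w> >= 0 given <u,v> >= 0 and <v,w> >= 0) for
    independent uniformly random directions u, v, w in R^d."""
    u, v, w = (rng.normal(size=(trials, d)) for _ in range(3))
    uv = (u * v).sum(axis=1) >= 0
    vw = (v * w).sum(axis=1) >= 0
    uw = (u * w).sum(axis=1) >= 0
    keep = uv & vw
    return uw[keep].mean()

for d in (2, 50):
    print(d, conditional_prob(d))
# Should come out roughly 0.75 for d = 2 and roughly 0.5 for d = 50.
```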

One can pursue that thought and show that the graph where one joins u to v if \langle u,v\rangle\geq 0 is, for large n, quasirandom, which means, roughly speaking, that it has several equivalent properties that are shared by almost all random graphs. (For a more detailed description, Googling “quasirandom graphs” produces lots of hits.)

For the problem of Conrey et al, the combinatorial object being examined is not a graph but a tournament: that is, a complete graph with orientations on each of its edges. (The vertices are dice, and we draw an arrow from D_1 to D_2 if D_1 beats D_2. Strictly speaking this is not a tournament, because of ties, but I am assuming that ties are rare enough for this to make no significant difference to the discussion that follows.) It is natural to speculate that the main conjecture is a consequence of a much more general statement, namely that this tournament is quasirandom in some suitable sense. In their paper, the authors do indeed make this speculation (it appears there as Conjecture 4).

It turns out that there is a theory of quasirandom tournaments, due to Fan Chung and Ron Graham. Chung and Graham showed that a number of properties that a tournament can have are asymptotically equivalent. It is possible that one of the properties they identified could be of use in proving the conjecture described in the previous paragraph, which, in the light of the Chung-Graham paper, is exactly saying that the tournament is quasirandom. I had hoped that there might be an analogue for tournaments of the spectral characterization of quasirandom graphs (which says that a graph is quasirandom if its second largest eigenvalue is small), since that could give a significantly new angle on the problem, but there is no such characterization in Chung and Graham’s list of properties. Perhaps it is worth looking for something of this kind.

Here, once again, is a link to the paper where the conjectures about dice are set out, and more detail is given. If there is enough appetite for a Polymath project on this problem, I am happy to host it on this blog. All I mean by this is that I am happy for the posts and comments to appear here — at this stage I am not sure what level of involvement I would expect to have with the project itself, but I shall certainly follow the discussion to start with and I hope I’ll be able to make useful contributions.



