Via Dan Katz, I learned about a recent problem (warning: JSTOR) from the American Mathematical Monthly (a publication of the Mathematical Association of America, making shaded icosahedrons look cool for almost 100 years):
What to Expect in a Game of Memory
Author(s): Daniel J. Velleman, Gregory S. Warrington
The American Mathematical Monthly, Vol. 120, No. 9 (November 2013), pp. 787-805
The game of memory is played with a deck of n pairs of cards. The cards in each pair are identical. The deck is shuffled and the cards laid face down. A move consists of flipping over first one card and then another. The cards are removed from play if they match. Otherwise, they are flipped back over and the next move commences. A game ends when all pairs have been matched. We determine that, when the game is played optimally, as n → ∞:
• The expected number of moves is (3 − 2 ln 2)n ≈ 1.61n.
• The expected number of times two matching cards are unwittingly flipped over is 2 ln 2 − 1 ≈ 0.39.
• The expected number of flips until two matching cards have been seen is asymptotically √(πn).
This is not a competitive game of memory, but the single-player version. It’s a kind of explore-exploit tradeoff with a simple structure — if you know how to exploit, do it. Note that one could always finish in 2n moves by flipping every card over once (there are 2n cards, so this takes n moves) to learn all of their identities and then removing all of the pairs one by one (another n moves). The better strategy is
- Remove any known pair.
- If no pair is known, flip a random unknown card and match it if you can.
- If the first card is not matchable, flip another random unknown card to learn its value (and remove the pair if it matches).
This strategy exploits optimally when it can and explores optimally when it can’t. The second bullet point in the abstract is the gain from getting lucky, i.e. two randomly drawn cards matching.
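To see the numbers emerge, here is a quick Monte Carlo sketch of that strategy in Python. The function names and the uniform random choice among unknown cards are my own modeling assumptions, not from the paper; it tracks moves, "lucky" unwitting matches, and the number of flips until two matching cards have first been seen.

```python
import random
from collections import defaultdict

def play(n, rng):
    """One game with n pairs, played with the strategy described above.
    Returns (moves, lucky_matches, flips_until_a_match_was_first_seen)."""
    unseen = [v for v in range(n) for _ in (0, 1)]  # two cards per value
    rng.shuffle(unseen)            # pop() flips a uniformly random unseen card
    known = defaultdict(int)       # value -> number of its cards we've located
    complete = []                  # values with both cards located
    moves = lucky = flips = 0
    first_seen = None              # flip count when a pair had first been seen

    def reveal():
        """Flip a random never-seen card; update flip and first-pair counters."""
        nonlocal flips, first_seen
        flips += 1
        v = unseen.pop()
        if first_seen is None and known[v] > 0:
            first_seen = flips
        return v

    pairs_left = n
    while pairs_left:
        moves += 1
        if complete:               # 1. exploit: remove a fully known pair
            del known[complete.pop()]
            pairs_left -= 1
            continue
        v1 = reveal()              # 2. explore: flip a random unknown card ...
        if known[v1] == 1:         #    ... and match it with its known partner
            del known[v1]
            pairs_left -= 1
            continue
        known[v1] = 1              # 3. no partner known: learn a second card
        v2 = reveal()
        if v2 == v1:               #    unwittingly flipped a matching pair
            lucky += 1
            del known[v1]
            pairs_left -= 1
        else:
            known[v2] += 1
            if known[v2] == 2:     # we now know where both cards of v2 are
                complete.append(v2)
    return moves, lucky, first_seen

def estimate(n, games, seed=0):
    """Average the three statistics over many games."""
    rng = random.Random(seed)
    runs = [play(n, rng) for _ in range(games)]
    return tuple(sum(r[i] for r in runs) / games for i in range(3))
```

For n = 1 the game is forced: one move, one lucky match, two flips. For larger n, `estimate(n, games)[0] / n` should settle noticeably below the naive 2 moves per pair, near the paper's asymptotic constant if I have recalled it correctly.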
The paper is an interesting read, but the arguments are all combinatorial. Since the argument is a limiting one as n → ∞, I wonder if there is a more “probabilistic” argument (this is perhaps a bit fuzzy) for the results.
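For the last quantity, at least, a birthday-problem heuristic seems to give the right order of growth (this is my own back-of-envelope sketch, not the paper's argument). Flip cards one at a time and let K be the number of flips until two matching cards have been seen. If the first k flips have shown k distinct values, the next flip avoids all of them with probability (2n − 2k)/(2n − k), so

```latex
\Pr[K > k] \;=\; \prod_{i=1}^{k-1} \frac{2n-2i}{2n-i}
\;\approx\; \prod_{i=1}^{k-1} \Bigl(1 - \frac{i}{2n}\Bigr)
\;\approx\; e^{-k^2/(4n)},
\qquad
\mathbb{E}[K] \;=\; \sum_{k \ge 0} \Pr[K > k]
\;\approx\; \int_0^\infty e^{-k^2/(4n)}\,dk \;=\; \sqrt{\pi n}.
```

So seeing the first pair behaves like a birthday collision among 2n "days," which is the kind of probabilistic shortcut I have in mind.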