CMSC 471 -- Mastermind Project Description -- Fall 2009

The Game of Mastermind

Mastermind is a classic guessing game. One player (the "chooser") chooses a "secret code" and the other player (the "guesser") must guess the code within a certain number of guesses.

In the standard version of the game, the code consists of four pegs ("code pegs"), each of which is one of six different colors (white, orange, yellow, red, blue, or green). For each guess, the guesser chooses four pegs of these same colors. The chooser then gives feedback on the guess, using smaller red and white pegs ("key pegs"). One red peg is given for each correct color that is also in the correct location; one white peg is given for each correct color that is in the wrong position.
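As a concrete illustration of the feedback rule, here is a minimal sketch of how a guess could be scored against a secret code. The function name and the list-of-color-symbols representation are just one possible choice for illustration, not part of the provided infrastructure.

  (defun score-guess (code guess)
    "Return two values: the number of red key pegs (right color, right position)
  and the number of white key pegs (right color, wrong position)."
    (let* ((reds (count t (mapcar #'eql code guess)))      ; exact matches
           (colors (remove-duplicates (append code guess)))
           ;; Matches ignoring position: for each color, the smaller of its
           ;; counts in the code and in the guess.
           (total (loop for c in colors
                        sum (min (count c code) (count c guess)))))
      (values reds (- total reds))))

  ;; Example: (score-guess '(red red blue green) '(red blue red yellow)) => 1, 2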

The game can be made arbitrarily more difficult by increasing the number of code pegs and the number of colors. Your job is to write a Mastermind guesser that can play any size game.

Mastermind Strategy

There are a number of different strategies that can be applied to reduce the number of guesses required to guess the code. The Wikipedia page on Mastermind gives a simple strategy that always solves the standard game within six guesses, and a five-guess algorithm by Donald Knuth. The five-guess approach exhaustively enumerates all of the possible codes, eliminating those that are inconsistent with the answers (red/white pegs) received so far. It then chooses a guess with the highest "minimax" score -- in this case, the guess such that [the minimum number of possibilities that could be eliminated by any of the possible red/white answers] is maximized. This sounds a bit confusing, but it is relatively straightforward to implement.
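Here is a minimal sketch of that minimax selection step, reusing the illustrative SCORE-GUESS function above and assuming CANDIDATES holds the codes still consistent with all feedback so far (both names are assumptions for this sketch). For simplicity it only considers guesses drawn from the remaining candidates; Knuth's full method also scores guesses that are already known to be wrong, since they can sometimes split the remaining set more evenly.

  (defun minimax-guess (candidates)
    "Choose the guess from CANDIDATES whose worst-case red/white answer
  leaves the fewest candidates, i.e. eliminates the most in the worst case."
    (let ((best (first candidates))
          (best-worst most-positive-fixnum))
      (dolist (guess candidates best)
        ;; Bucket the remaining candidates by the answer they would give to
        ;; this guess; the biggest bucket is this guess's worst case.
        (let ((buckets (make-hash-table :test #'equal))
              (worst 0))
          (dolist (code candidates)
            (multiple-value-bind (r w) (score-guess code guess)
              (incf (gethash (cons r w) buckets 0))))
          (maphash (lambda (answer n)
                     (declare (ignore answer))
                     (setf worst (max worst n)))
                   buckets)
          (when (< worst best-worst)
            (setf best-worst worst
                  best guess))))))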

Of course, this five-guess strategy isn't at all scalable: as the number of code pegs and the number of colors grows, the complexity of this approach increases dramatically. In fact, Stuckman and Zhang showed that the "Mastermind Satisfiability Problem" (choosing a set of guesses that, taken together, will determine the code unambiguously) is NP-complete (i.e., it is presumed to require time that grows exponentially in the number of code pegs and colors).

The 1982 SIGART article by Rao gives a heuristic algorithm that seems unlikely to be optimal, but which would probably work for a problem of any size. (Whether it would scale efficiently is another question...)

Here are three simple strategies that aren't very scalable, but will always work:

  1. Exhaustively enumerate all guesses, and try guessing each possible code, in lexicographic order, without paying any attention to the red/white peg responses. This method will take no more than c^p guesses (where c is the number of colors and p is the number of code pegs). Not very good, but it's a starting point.
  2. Exhaustively enumerate all guesses, and try guessing each possible code, in lexicographic order, skipping any codes that are demonstrably inconsistent with any previous red/white peg responses. This will be at least somewhat better than strategy #1. (A sketch of the consistency test appears just after this list.)
  3. Use your first c-1 guesses to guess "all reds," "all whites," etc. -- one "monochrome" guess per color, for all but one of the colors. At that point, you'll know how many code pegs there are of each color (namely, the number of red pegs associated with each guess). You don't need to actually guess the last color, since that color must have however many red pegs weren't allocated to other colors. Once you have this information, you can start generating only combinations that are consistent with the known color distribution. This approach has lower complexity than the first two strategies, since it "narrows in" on the right colors more quickly, and does less checking against the red/white responses. (Rao's approach is somewhat similar in spirit to this strategy, but adds other constraints to make it more sophisticated.)
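Here is a minimal sketch of the pieces strategy #2 needs, again reusing the illustrative SCORE-GUESS function above and assuming the guess history is kept as a list of (guess reds whites) triples. This representation is just one possible choice, not the provided API.

  (defun consistent-p (code history)
    "True if CODE would have produced every (guess reds whites) answer in HISTORY."
    (every (lambda (entry)
             (destructuring-bind (guess reds whites) entry
               (multiple-value-bind (r w) (score-guess code guess)
                 (and (= r reds) (= w whites)))))
           history))

  (defun all-codes (colors num-pegs)
    "All codes of length NUM-PEGS over COLORS, in lexicographic order."
    (if (zerop num-pegs)
        (list nil)
        (let ((shorter (all-codes colors (1- num-pegs))))
          (loop for c in colors
                append (mapcar (lambda (rest) (cons c rest)) shorter)))))

  ;; Strategy #2 then simply guesses the first element of
  ;;   (remove-if-not (lambda (code) (consistent-p code history))
  ;;                  (all-codes colors num-pegs))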

These might be good starting points when you begin your implementation, as baselines to make sure you can get something working, and against which you can compare the performance of a more sophisticated strategy.

Mastermind Challenges

So how can you solve this problem in the general case (any number of code pegs and colors)? Well, that is your first of three challenges: (1) implementing a general-purpose Mastermind algorithm. (This challenge will be based on a fixed-size game, of a size to be announced a few weeks before the tournament, once I have a sense of what's a reasonable size for most guessers to handle fairly well. As a rough guess, I would think this game might have 8-10 pegs and 8-10 colors.) Clearly, one wants to choose a guess at each step that will maximally reduce the search space (number of remaining legal guesses) at the next step. How exactly to do this is an open question.

Poking around on the web, you'll find any number of research papers and mathematical analyses of different variants of the Mastermind problem. I expect that some of you will look through this literature and discover some ideas that will inspire your implementation. You are absolutely welcome to use any such research (whether formally published or not), but as always, you must cite your sources properly!!

Other teams will probably use a more ad hoc approach, playing the game for a while and using their intuition to design heuristic strategies that seem to work. Still others may take a mathematical approach, trying to analyze the set of remaining solutions (without exhaustively enumerating them) to choose a guess that can be shown mathematically to reduce the number of possible remaining solutions.

As the problem gets larger (more code pegs/colors), it will take any given algorithm longer to make each guess, and more guesses will be required. Therefore, the second challenge is to make your Mastermind algorithms (2) as scalable as possible (in terms of CPU time and the number of guesses needed to guess the code).

But what if the player who is choosing the code is known to be biased in some way? What if you knew that the chooser never uses pegs of certain colors, never uses the same color more than once, always puts the same color in the first and last places, or prefers codes with fewer colors (but sometimes uses more colors just to throw you off the scent)? This leads us to your third and final challenge: (3) learning a chooser's strategy and using this knowledge to guide the guessing process.

For the learning challenge, I will design and implement several biased code choosers, each with a particular "pattern" of codes that you will need to discover using a learning approach. Examples of possible biases might be: never choose a code with a red or orange peg; always choose codes with exactly three colors; never use a color more than once; always choose codes that alternate colors; have a probabilistic preference for fewer colors (e.g., with probability .5, use exactly 1 color; with probability .25, use exactly 2 colors, etc.); always have the colors appear in a certain order (e.g., red is always before yellow). As you can see, there is a nearly unlimited space of possible biases, so you will have to think hard about your learning feature space and algorithm.
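As a purely illustrative starting point (not a recommended or provided feature set), a few simple statistics computed over a chooser's sample of codes can already expose biases like several of the examples above:

  (defun summarize-sample (codes)
    "Return an alist of simple summary statistics over a list of sample CODES."
    (let ((color-counts (make-hash-table))
          (distinct-counts '())
          (repeat-count 0))
      (dolist (code codes)
        (dolist (c code)
          (incf (gethash c color-counts 0)))          ; per-color usage
        (let ((distinct (length (remove-duplicates code))))
          (push distinct distinct-counts)             ; colors per code
          (when (< distinct (length code))
            (incf repeat-count))))                    ; codes with a repeated color
      (list (cons :color-usage
                  (let ((usage '()))
                    (maphash (lambda (c n) (push (cons c n) usage)) color-counts)
                    usage))
            (cons :mean-distinct-colors
                  (/ (reduce #'+ distinct-counts) (max 1 (length distinct-counts))))
            (cons :fraction-with-repeats
                  (/ repeat-count (max 1 (length codes)))))))

A chooser that never uses red would show up as a zero count under :color-usage; one that always uses exactly three colors would show up in :mean-distinct-colors; positional and ordering biases would, of course, need additional features.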

There will be a reasonably varied set of "example biased choosers" (for each of these, I will give you a Lisp implementation of the chooser, a description of the strategy it uses, and a large sample of codes generated by the chooser).

There will also be several "test biased choosers" that will be used in the tournament. For these, you will receive only a large sample of generated codes. Most of these choosers will use strategies that are in some way similar to the example strategies; but two or three of them will use some completely novel strategy.

I will distribute the test choosers at least several weeks before the tournament so that you have some time to run your learner on them. You should not use the results of learning to figure out the chooser's strategy by hand, but you may do your learning in advance (offline) and then produce a different parameterized guesser for each of the test choosers.

All teams should design a guessing strategy that is expected to do well on challenge #1. However, I expect that different teams will choose to focus more of their energy on either challenge #2 or #3. Of course, you should have some solution that is plausibly scalable (works for any size problem, at least in theory), and some learning approach (even if it is quite simple and limited). Three-person teams, however, will be expected to have good designs for all three categories. Also, all of the project writeups (as discussed below) must address all three challenges and the approaches you designed to try to meet them.

Project Requirements and Grading

There are three graded components of the project: (1) your implemented system, (2) a group project report, and (3) your performance in the class tournament. Note that you will be primarily graded on the thoughtfulness and clarity of your design and presentation, and not primarily on your algorithm's performance. The reason for this is that it gives you the freedom to try a risky approach that is interesting from a design perspective but might not work very well. An approach that doesn't work very well, and is also naive, trivial, or not well motivated will receive a correspondingly trivial grade.

Implemented Lisp-based Mastermind Player (40%). Your implementation will be graded on correctness (i.e., whether you correctly implemented the solution described in your paper), as well as design (generality, clarity, and elegance) and readability (indentation, comments, modularity, etc.).

Project Report (50%). Each team must submit a project report describing your approach, your experience in designing and implementing the approach, and the performance of your system. I would expect these reports to be somewhere in the 5-10 page range, but there is no minimum or maximum as long as you include all of the required information. Your report should include the following:

  1. Survey of the background reading you did on the Mastermind game and strategies, with citations.
  2. A discussion of your approaches for Mastermind guessing and for learning the biased code choosing strategies, with explicit citation of ideas that you drew upon or borrowed directly from the literature.
  3. Some theoretical analysis (mathematically formal would be nice, but intuitive is OK too) that discusses the computational complexity of your algorithm, and the expected number of guesses (which could be based on the degree to which each guess is expected to reduce the size of the remaining solution space) in terms of the size of the problem. (A simple example of such an argument appears just after this list.)
  4. Experimental evaluation of your system with respect to the three challenges. This evaluation should include performance of your system and at least one baseline. (The baseline can be the generate-and-test algorithm that I've provided, but preferably will be something a bit more reasonable, such as my strategy #2 or #3, or Rao's strategy.)
    1. Performance on the fixed-size problem (CPU time and guesses for at least 10 random codes).
    2. Performance on the scalability challenge, as measured by CPU time and number of guesses for a series of problems of increasing complexity (number of code pegs and colors) -- the exact problem sizes are up to you.
    3. Performance on the learning challenge, using data from the biased code choosers that I will provide. You should include results on both the "example" biased choosers (i.e., the ones with known strategies) and the "test" choosers (i.e., the ones that will be used in the tournament). Results here should include CPU time and number of guesses, and may also include other evaluations of the learning itself (e.g., some objective or subjective measure of how well your learner learned the biased chooser's strategy).
    The experimental results should be presented clearly; you may wish to include tables of data, scatterplots, and/or statistics (e.g., means, standard deviations, confidence intervals).
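As one example of the kind of intuitive argument item 3 might make (a standard information-theoretic observation, offered here only as an illustration): each guess receives one of a limited number of distinct red/white answers, which limits how quickly any strategy can distinguish among the c^p possible codes.

  % Distinct (red, white) answers for a guess of p pegs: all pairs with
  % r + w <= p, minus the impossible answer (r, w) = (p-1, 1).
  \[ R(p) = \frac{(p+1)(p+2)}{2} - 1 \]
  % Any guesser is a decision tree with branching factor at most R(p), so in
  % the worst case it needs at least
  \[ \left\lceil \log_{R(p)} c^p \right\rceil
     = \left\lceil \frac{p \ln c}{\ln R(p)} \right\rceil \]
  % guesses; for the standard game (p = 4, c = 6), R(4) = 14 and the bound is 3.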

Class Tournament (10%). The tournament performance grade will be based on whether your player successfully runs (i.e., you have a working system at the time of the dry run and final tournament -- about half of the credit), and on how well it actually performs in the tournament (i.e., your system beats the other approaches -- about half of the credit). (Note that the latter grade, about 5% of your total project grade, is the only part of your project grade that is directly tied to performance (run time and number of guesses). So it is important to have a player that plays well, but it's even more important to have a strong justification and clear design for your player.)

The class tournament will pit teams' guessers against each other to see who is the best guesser in each of the three "challenge categories." For the fixed-size game, the score (to be minimized) will be the average number of guesses across some number of random codes (e.g., 5). For the scalability challenge, I haven't yet decided whether to use a competitive ratio of some sort (e.g., some measure of performance on a large problem relative to performance on a smaller problem), or whether to impose a time or number-of-guesses limit and measure the maximum problem size that each team's guesser can handle within that limit. (I'm open to proposals...) For the learnability challenge, the winners will be the teams whose guessers have the smallest number of guesses, on average, for each of the several biased code choosers.

Lisp Infrastructure

I have provided two Lisp files. mm.lisp contains code to set up and run a tournament; mm-solver.lisp contains code for a single naive guesser, and shows you how to run a tournament with that single guesser. This should give you the infrastructure you need to implement your own guesser that will work within the tournament. (Since there are still some undetermined issues about exactly how the tournament will run, this infrastructure code may change over the course of the semester, but the basic API should stay the same.)

Note that you will probably want to use Lisp's optimizing compiler to make your code run faster for the actual tournament. I will develop some naming conventions for files and players so that the tournament runs smoothly; details to be announced later. All tournament rounds will be run on the gl machines, so you should be certain to test your code there even if you develop it on a different machine.

I will likely ask each team to create their own package and place all of their code into that package, so that we avoid namespace conflicts across different teams' code. More details on how to do this will be provided later.
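For reference, a team package might look roughly like the sketch below; the package name and exported symbol are placeholders, since the actual naming conventions and required API will be announced later.

  (defpackage :team-placeholder          ; placeholder name; conventions TBA
    (:use :common-lisp)
    (:export :make-guesser))             ; export whatever the tournament API requires

  (in-package :team-placeholder)

  ;; ... all of the team's guesser code goes here ...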

I have not yet written any utility functions to print tournament results, or do other useful things. Please feel free to send me any code you develop that might be generally useful; I will vet it against the academic integrity policy (no sharing of solutions across teams...) and post it to a code repository.

Mastermind Resources

Here are a few online resources that you may find useful.

"Mastermind", Wikipedia, http://en.wikipedia.org/wiki/Mastermind_%28board_game%29

"Investigations into the Master Mind Board Game," Toby Nelson, February 1999, http://www.tnelson.demon.co.uk/mastermind/

Jeff Stuckman and Guo-Qiang Zhang, "Mastermind is NP-Complete," arXiv:cs/0512049, http://arxiv.org/abs/cs.CC/0512049

T. Mahadeva Rao, "An algorithm to play the game of mastermind," SIGART Bulletin 82, October 1982, http://portal.acm.org/ft_gateway.cfm?id=1056607&type=pdf&coll=Portal&dl=GUIDE&CFID=55406954&CFTOKEN=38683075