CMSC 471 -- Mastermind Project Description -- Fall 2011

Released 10/4/11, updated 11/25/11

The Game of Mastermind

Mastermind is a classic guessing game. One player (the "chooser") chooses a "secret code" and the other player (the "guesser") must guess the code within a certain number of guesses.

In the standard version of the game, the code consists of four pegs ("code pegs"), each of which is one of six different colors (white, orange, yellow, red, blue, or green). For each guess, the guesser chooses four pegs of these same colors. The chooser then gives feedback on the guess, using smaller red and white pegs ("key pegs"). One red peg is given for each correct color that is also in the correct location; one white peg is given for each correct color that is in the wrong position.
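To make the scoring concrete, here is a minimal sketch of the red/white feedback computation in Lisp, assuming codes and guesses are represented as lists of color symbols of equal length. The helper name KEY-PEGS is mine, not part of the provided infrastructure (which does its own scoring):

;; Return (RED WHITE): exact matches, plus color-only matches in the wrong position.
(defun key-pegs (guess code colors)
  (let* ((red (count t (mapcar #'eql guess code)))
         (color-matches (loop for c in colors
                              sum (min (count c guess) (count c code)))))
    (list red (- color-matches red))))

For example, scoring the guess (red red blue green) against the code (red blue blue yellow) gives (2 0): two exact matches, and no remaining color-only matches.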

The game can be made arbitrarily more difficult by increasing the number of code pegs and the number of colors. Your job is to write a Mastermind guesser that can play any size game.

Mastermind Strategy

There are a number of different strategies that can be applied to reduce the number of guesses required to find the code. The Wikipedia page on Mastermind gives a simple six-guess strategy (which always finds a standard Mastermind solution within six guesses), and a five-guess solution by Donald Knuth. The five-guess approach exhaustively enumerates all of the possible codes, eliminating those that are inconsistent with the answers (red/white pegs) received so far. It then chooses a guess with the highest "minimax" score -- in this case, the guess such that [the minimum number of possibilities that could be eliminated by any of the possible red/white answers] is maximized. This sounds a bit confusing, but it is relatively straightforward to implement.
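To make the minimax idea concrete, here is a simplified sketch that reuses the KEY-PEGS helper above. The names CONSISTENT-P, PRUNE-CANDIDATES, and MINIMAX-GUESS are mine, GUESS-HISTORY is assumed to be a list of (guess red white) triples, and, unlike Knuth's full method, this version only considers guesses drawn from the remaining candidates:

;; A candidate code is consistent if it would have produced every answer seen so far.
(defun consistent-p (code guess-history colors)
  (every (lambda (entry)
           (destructuring-bind (guess red white) entry
             (equal (key-pegs guess code colors) (list red white))))
         guess-history))

(defun prune-candidates (candidates guess-history colors)
  (remove-if-not (lambda (code) (consistent-p code guess-history colors))
                 candidates))

;; Pick the candidate whose worst-case answer leaves the fewest possibilities.
(defun minimax-guess (candidates colors)
  (flet ((worst-case (guess)
           ;; group the candidates by the answer they would give to GUESS;
           ;; the largest group is the worst case for this guess
           (let ((buckets (make-hash-table :test #'equal)))
             (dolist (code candidates)
               (incf (gethash (key-pegs guess code colors) buckets 0)))
             (loop for n being the hash-values of buckets maximize n))))
    (loop with best = nil and best-score = nil
          for guess in candidates
          for score = (worst-case guess)
          when (or (null best-score) (< score best-score))
            do (setf best guess best-score score)
          finally (return best))))

Pruning with PRUNE-CANDIDATES after each answer is the "eliminate inconsistent codes" step; MINIMAX-GUESS is the "choose the guess whose worst-case answer eliminates the most" step.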

Of course, this five-guess strategy isn't at all scalable: as the number of code pegs and the number of colors grow, the complexity of this approach increases dramatically. In fact, Stuckman and Zhang showed that the "Mastermind Satisfiability Problem" (choosing a set of guesses that, taken together, will determine the code unambiguously) is NP-complete (i.e., it is presumed to require time that grows exponentially in the number of code pegs and colors).

The 1982 SIGART article by Rao gives a heuristic algorithm that seems unlikely to be optimal, but which would probably work for a game of any size. (Whether it would scale efficiently is another question...)

Here are three simple strategies that aren't very scalable, but will always work:

  1. Exhaustively enumerate all guesses, and try guessing each possible code, in lexicographic order, without paying any attention to the red/white peg responses. This method will take no more than c^p guesses (where c is the number of colors and p is the number of code pegs). Not very good, but it's a starting point.
  2. Exhaustively enumerate all guesses, and try guessing each possible code, in lexicographic order, skipping any codes that are demonstrably inconsistent with any previous red/white peg responses. This will be at least somewhat better than strategy #1 (see the sketch after this list).
  3. Use your first c-1 guesses to guess "all reds," "all whites," etc. - one "monochrome" guess per color, for all but one of the colors. At that point, you'll know how many code pegs there are of each color (namely, the number of red pegs associated with each guess). You don't need to actually guess the last color, since that color must have however many red pegs weren't allocated to other colors. Once you have this information, you can start generating only combinations that are consistent with the known color distribution. This approach has lower complexity than the first two strategies, since it "narrows in" on the right colors more quickly, and does less checking against the red/white responses. (Rao's approach is somewhat similar in spirit to this strategy, but adds other constraints to make it more sophisticated.)
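Here is a minimal sketch of strategy #2, reusing the CONSISTENT-P helper from the earlier sketch; ALL-CODES and NEXT-LEXICOGRAPHIC-GUESS are names I made up, and GUESS-HISTORY is again a list of (guess red white) triples:

;; All codes of CODE-LENGTH pegs over COLORS, in lexicographic order.
(defun all-codes (colors code-length)
  (if (zerop code-length)
      (list nil)
      (loop for c in colors
            append (mapcar (lambda (rest) (cons c rest))
                           (all-codes colors (1- code-length))))))

;; The next guess: the first code that is consistent with all answers so far.
(defun next-lexicographic-guess (colors code-length guess-history)
  (find-if (lambda (code) (consistent-p code guess-history colors))
           (all-codes colors code-length)))

Strategy #1 is the same loop with the CONSISTENT-P check removed; strategy #3 would replace ALL-CODES with a generator that only produces codes matching the known color counts.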

These might be good starting points when you begin your implementation, as baselines to make sure you can get something working, and against which you can compare the performance of a more sophisticated strategy.

Mastermind Challenges

So how can you solve this problem in the general case (any number of code pegs and colors)? Well, that is the first of your three challenges: (1) implementing a general-purpose Mastermind algorithm. (This challenge will be based on a fixed-size game of 8 pegs and 8 colors, a size chosen so that most guessers should be able to handle it fairly well.) Clearly, one wants to choose a guess at each step that will maximally reduce the search space (the number of remaining legal guesses) at the next step. How exactly to do this is an open question.

Poking around on the web, you'll find any number of research papers and mathematical analyses of different variants of the Mastermind problem. I expect that some of you will look through this literature and discover some ideas that will inspire your implementation. You are absolutely welcome to use any such research (whether formally published or not), but as always, you must cite your sources properly!!

Other teams will probably use a more ad hoc approach, playing the game for a while and using their intuition to design heuristic strategies that seem to work. Still others may take a mathematical approach, trying to analyze the set of remaining solutions (without exhaustively enumerating them) in order to choose a guess that can be shown mathematically to reduce the number of possible remaining solutions.

As the problem gets larger (more code pegs/colors), any given algorithm will take longer to make each guess, and more guesses will be required. Therefore, your second challenge is to (2) make your Mastermind algorithm as scalable as possible, in terms of both CPU time and the number of guesses needed to find the code within a maximum running time. That limit is TBA, but is currently set to 1 second of CPU time in the tournament code. The unit of CPU time returned by clisp's (get-internal-run-time) function, which is used for timing, is a microsecond, so *max-time* is set to 1,000,000. You may wish to change this for your own testing; if you set *max-time* to 0, the tournament code will not enforce any time limit, which may be useful while you are developing your algorithms. Note that the timeout is handled by a throw from the mm-score function back to the tournament code, so your code doesn't even need to know about it. However, since clisp doesn't support multithreading, there is no easy way to interrupt a guesser that is stuck in an infinite loop, so we may have to do some manual timeouts in the actual tournament.
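For your own benchmarking, here is a minimal sketch that times a single solver run using the standard Common Lisp clock functions; TIME-ONE-GAME and its arguments are hypothetical names, and the tournament code does its own timing and timeout handling:

;; Call SOLVER on one game and return the CPU seconds it used.
(defun time-one-game (solver colors code-length generator)
  (let ((start (get-internal-run-time)))
    (funcall solver colors code-length generator)
    (float (/ (- (get-internal-run-time) start)
              internal-time-units-per-second))))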

But what if the player who is choosing the code is known to be biased in some way? What if you knew that the chooser never uses pegs of certain colors, never uses the same color more than once, always puts the same color in the first and last places, or prefers codes with fewer colors (but sometimes uses more colors just to throw you off the scent)? This leads us to your third and final challenge: (3) learning a chooser's strategy and using this knowledge to guide the guessing process.

For the learning challenge, I will design and implement several biased code choosers, each with a particular "pattern" of codes that you will need to implement a learning approach to discover. Examples of possible biases might be: never choose a code with a red or orange peg; always choose codes with exactly three colors; never use a color more than once; always choose codes that alternate colors; have a probabilistic preference for fewer colors (e.g., with probability .5, use exactly 1 color; with probability .25, use exactly 2 colors, etc.); always have the colors appear in a certain order (e.g., red always appears before yellow). As you can see, there is a nearly unlimited space of possible biases, so you will have to think hard about your learning feature space and algorithm.
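To make the idea concrete, here is a sketch of one hypothetical biased chooser (an illustration only, not one of the actual training or test biases): it never uses red or orange and otherwise picks pegs uniformly at random. The symbols RED and ORANGE are assumed to be how colors are represented:

;; A hypothetical biased chooser: a random code that never contains RED or ORANGE.
(defun example-biased-chooser (colors code-length)
  (let ((allowed (remove-if (lambda (c) (member c '(red orange))) colors)))
    (loop repeat code-length
          collect (nth (random (length allowed)) allowed))))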

There will be a reasonably varied set of "example biased choosers," for which I will give you a Lisp implementation of each chooser and a description of the strategy it uses, along with a large sample of codes generated by that chooser.

There will also be several "test biased choosers" that will be used in the tournament. For these, you will receive only a large sample of generated codes. Most of these choosers will use strategies that are in some way similar to the example strategies; but two or three of them will use some completely novel strategy.

I will distribute samples from the test choosers at least several weeks before the tournament so that you have some time to run your learner on them. You should not use the results of learning to figure out by hand what each strategy is, but you may do your learning in advance (offline) and then produce a different parameterized guesser for each of the test choosers.

All teams should design a guessing strategy that is expected to do well on challenge #1. I expect that different teams will choose to focus more of their energy on either challenge #2 or #3, but you should have some solution that is plausibly scalable (i.e., works for any size problem, at least in theory) and some learning approach (even if it is quite simple and limited). Three-person teams will be expected to have good designs for all three categories. Also, all of the project writeups (as discussed below) must address all three challenges and the approaches you designed to try to meet them.

More on the Learning Challenge

(sent to class mailing list 11/6/11)

For the learning challenge, your code should operate in two stages. In an offline stage (which can happen before the tournament begins, and before you submit your code), you should have a machine learning (classification) algorithm that learns the biased code generation rules. For each of the (three or four) biased rules, this ML algorithm should read a file containing positive and negative training instances, and generate some encoding of the learned mapping. Then your guesser should use this mapping to constrain or bias its guessing strategy (i.e., don't guess a code that this generator would never or rarely generate). You should save the encoding that you learned and submit it along with your guesser, so that in the tournament, your guesser will be able to use the appropriate bias (selecting from among the alternative biases using the generator argument to the solver).
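As a concrete (if overly simple) illustration of the offline stage, here is a sketch that learns only the overall color frequencies from a sample of codes. It assumes, purely for illustration, that each code in the file is written as a Lisp list of color symbols and that every instance read is a positive example; the actual file format is defined by train-biases.lisp, and a real learner will need a richer feature space than raw color frequencies:

;; Read codes from FILENAME and return an alist of (color . relative-frequency);
;; this alist is the saved "encoding" a guesser could load at tournament time,
;; e.g. to rule out colors the biased chooser never appears to use.
(defun learn-color-frequencies (filename)
  (let ((counts (make-hash-table)) (total 0))
    (with-open-file (in filename)
      (loop for code = (read in nil nil)
            while code
            do (dolist (c code)
                 (incf (gethash c counts 0))
                 (incf total))))
    (loop for c being the hash-keys of counts using (hash-value n)
          collect (cons c (/ n total)))))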

I've posted four training files for the biases, along with the Lisp code that I used to generate them. (You can generate more data if you want to, though you won't get the code for the "test biases" when it's time for the tournament.) The Lisp code explains what each of the biases actually is (but again, at tournament time, you won't know! -- though the tournament biases will be similar in complexity and nature to the four training biases).
http://www.cs.umbc.edu/courses/undergraduate/471/fall11/train-biases.lisp
http://www.cs.umbc.edu/courses/undergraduate/471/fall11/train-bias1.txt
http://www.cs.umbc.edu/courses/undergraduate/471/fall11/train-bias2.txt
http://www.cs.umbc.edu/courses/undergraduate/471/fall11/train-bias3.txt
http://www.cs.umbc.edu/courses/undergraduate/471/fall11/train-bias4.txt

Project Requirements and Grading

There are three graded components of the project: (1) your implemented system, (2) a group project report, and (3) your performance in the class tournament. Note that you will be graded primarily on the thoughtfulness and clarity of your design and presentation, not on your algorithm's performance. The reason for this is that it gives you the freedom to try a risky approach that is interesting from a design perspective but might not work very well. An approach that doesn't work very well and is also naive, trivial, or not well motivated will receive a correspondingly low grade.

Implemented Lisp-based Mastermind Player (40%). Your implementation will be graded on correctness (i.e., did you correctly implement the solution that you described in your paper), as well as design (generality, clarity, and elegance) and readability (indentation, comments, modularity, etc.)

You must have a working Mastermind player by the date of the tournament dry run (Thursday, November 17). If you do not submit a working player into this tournament, you will receive a 5 point penalty on your grade for the final project. (Players that "almost work" but require some manual repair to get them working may receive a partial penalty.) Your final player must be submitted by the time of the second dry run (Thursday, December 8). You may resubmit your code after that time, but may not change any of the top-level function names, since we will already have set up the configuration file for the tournament. The in-class tournament will be held on the last day of class, Tuesday, December 13.

Note that we may have a "zeroth dry run" on some earlier date (before November 17) to test the tournament software itself. Participation in that dry run is entirely optional and will not affect your grade (but may give you some sense of how your solution performs compared to other teams' solutions).

Project Report (50%). Each team must submit a project report describing your approach, your experience in designing and implementing it, and the performance of your system. I would expect these reports to be somewhere in the 5-10 page range, but there is no strict minimum or maximum as long as you include all of the required information. Your report should include the following:

  1. Survey of the background reading you did on the Mastermind game and strategies, with citations.
  2. A discussion of your approaches for Mastermind guessing and for learning the biased code choosing strategies, with explicit citation of ideas that you drew upon or borrowed directly from the literature.
  3. Some theoretical analysis (mathematically formal would be nice, but intuitive is OK too) that discusses the computational complexity of your algorithm and the expected number of guesses (which could be based on the degree to which each guess is expected to reduce the size of the remaining solution space), in terms of the size of the problem.
  4. Experimental evaluation of your system with respect to the three challenges. This evaluation should include performance of your system and at least one baseline. (The baseline can be the generate-and-test algorithm that I've provided, but preferably will be something a bit more reasonable, such as my strategy #2 or #3, or Rao's strategy.)
    1. Performance on the fixed-size problem (CPU time and number of guesses for at least 10 random codes).
    2. Performance on the scalability challenge, as measured by CPU time and number of guesses for a series of problems of increasing complexity (number of code pegs and colors) -- the exact problem sizes are up to you.
    3. Performance on the learning challenge, using data from the biased code choosers that I will provide. You should include results on both the "example" biased choosers (i.e., the ones with known strategies) and the "test" choosers (i.e., the ones that will be used in the tournament). Results here should include CPU time and number of guesses, and may also include other evaluations of the learning itself (e.g., some objective or subjective measure of how well your learner learned the biased chooser's strategy).
    The experimental results should be presented clearly; you may wish to include tables of data, scatterplots, and/or statistics (e.g., means, standard deviations, confidence intervals). A small helper sketch for computing such statistics follows this list.
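If you want a quick way to compute such summary statistics in Lisp, here is a small sketch; MEAN-AND-STDDEV is a hypothetical helper, not part of the provided infrastructure:

;; Return the mean and sample standard deviation of the list XS
;; (e.g., the number of guesses over a set of random codes).
(defun mean-and-stddev (xs)
  (let* ((n (length xs))
         (mean (/ (reduce #'+ xs) n))
         (var (if (> n 1)
                  (/ (reduce #'+ (mapcar (lambda (x) (expt (- x mean) 2)) xs))
                     (1- n))
                  0)))
    (values (float mean) (sqrt (float var)))))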

The final code (in hardcopy and submitted on gl) and the project report (hardcopy only) are due Monday, December 19, at ITE 337, by 2:30 p.m. You may submit the hardcopies to Dr. desJardins's mailbox in ITE 325B or in person up to the deadline. NO LATE PROJECT REPORTS OR CODE SUBMISSIONS WILL BE ACCEPTED UNDER ANY CIRCUMSTANCES. You may redesign and resubmit your code after the in-class tournament if you wish; your grade for the implementation and project report will be based on your final submission. If you change your code after the tournament, however, you should include a summary of those changes in your report, so that we are aware of any problems you fixed after the tournament.

Class Tournament (10%). The tournament performance grade will be based on whether your player runs successfully (i.e., you have a working system at the time of the dry run and the final tournament -- about half of the credit) and on how well it actually performs in the tournament (i.e., how your system fares against the other approaches -- about half of the credit). Note that the latter grade, about 5% of your total project grade, is the only part of your project grade that is directly tied to performance (run time and number of guesses). So it is important to have a player that plays well, but it's even more important to have a strong justification and clear design for your player.

The class tournament will pit teams' guessers against each other to see who is the best guesser in each of the three "challenge categories." In all rounds, there will be a time limit (to be announced in advance, after the dry run; most likely corresponding to around 30 seconds in elapsed time, on the gl server). Within this time constraint, the winner will be the team that uses the fewest guesses, on average, to find the correct solution. I may schedule some "elimination rounds" offline, prior to the real-time in-class tournament, to select the "seeds" who will compete in the in-class tournament. (But perhaps we will not reveal the selected entrants until the day of the tournament, just to keep things lively.)

For the fixed-size game, the score will just be the average number of guesses across some number of random codes (probably 5, but TBD once we see how long a typical solution takes, how many teams there are, etc.)

For the scalability challenge, the size of the problems will increase after each round (in both number of pegs and number of colors -- note that there could be more pegs than colors, or more colors than pegs).

For the learnability challenge, the winners will be the teams whose guessers have the smallest number of guesses, on average, for each of the several biased code choosers.

Lisp Infrastructure

I have provided two Lisp files: mm.lisp (revised 11/24/11) contains code to set up and run a tournament. mm-solver.lisp contains code for a single naive guesser, and shows you how to run a tournament with that single guesser. This should give you the infrastructure you need to implement your own guesser that will work within the tournament. (Since there are still some undetermined issues about exactly how the tournament will run, this infrastructure code may change over the course of the semester, but the basic API should stay the same.)

Note that you will probably want to use Lisp's optimizing compiler to make your code run faster for the actual tournament. I will develop some naming conventions for files and players so that the tournament runs smoothly; details to be announced later. All tournament rounds will be run on the gl machines, so you should be certain to test your code there even if you develop it on a different machine.

Each team will need to create their own package and place all of their code into that package, so that we avoid namespace conflicts across different teams' code. You will need to let us know (by email to Dr. desJardins and Yasaman) what userid you have submitted your code under, the name of your main file, what package name you are using, and the name of your solver function for each of the three challenges.

Every team's solver function must take exactly three arguments: the set of colors, the length of the code being guessed, and the generator function that was used to create the codes. You should use the same solver function for all challenges, but your solver function may call a different sub-function based on the number of pegs/colors and the generating function. In the final tournament, this generating function will either be #'mm::mm-gen-random (for challenges 1 and 2) or one of the four test biases (#'mm::test-bias1-pos, #'mm::test-bias2-pos, #'mm::test-bias3-pos, or #'mm::test-bias4-pos) for challenge 3. (Note that you may define a different solver function for each of the learning challenges; we'll let you know later how to submit this information and when you need to submit it by.)

The mm-solver.lisp file gives you a template of what the load file should look like, and what the solver API should look like. Note that your guesser should invoke mm-score each time it wants to make a guess. mm-score takes three arguments: your guess, the true code (which should always be passed in as *code*), and the list of colors (for which you should always pass the color list that was sent to your guessing function).
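To make the API concrete, here is a minimal sketch of a solver file. The package name TEAM-SKETCH, the function name TEAM-SOLVER, and the assumptions that mm-score returns the (red white) feedback and that mm-score and *code* live in the MM package (as #'mm::mm-gen-random suggests) are all mine; it also reuses the NEXT-LEXICOGRAPHIC-GUESS helper sketched earlier. mm-solver.lisp remains the authoritative template:

(defpackage :team-sketch (:use :common-lisp))
(in-package :team-sketch)

;; Top-level solver: COLORS is the color list, CODE-LENGTH the number of code
;; pegs, GENERATOR the function used to create the code.  Assumes the helpers
;; sketched earlier are also defined in (or imported into) this package.
(defun team-solver (colors code-length generator)
  (declare (ignore generator))   ; a learned guesser would dispatch on GENERATOR
  (let ((history '()))
    (loop for guess = (next-lexicographic-guess colors code-length history)
          ;; assumption: mm-score returns the (red white) feedback; the timeout
          ;; is handled inside mm-score via a throw, as described above
          for answer = (mm::mm-score guess mm::*code* colors)
          until (eql (first answer) code-length)
          do (push (list guess (first answer) (second answer)) history)
          finally (return guess))))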

You should submit your code on gl using the "submit" command with the "dryrun1", "dryrun2", or "tournament" projects, depending on the submission. Your submission should have a single main file, which may load other files (in the same directory) as needed. To load any other files, you should use:

(load (make-pathname :directory (pathname-directory *load-pathname*) :name "FILE"))
where FILE is replaced by the name of the file you want to load. This will cause clisp to look for FILE.lisp in the same directory as your main load file.

Your main file should not load mm.lisp, since the tournament will load that shared code for everyone. You must not change any of the functions in mm.lisp. Your submission should not include any print statements. Do not submit a tarfile, a zip file, test data, or any files other than your lisp code.

I have not yet written any utility functions to print tournament results or do other useful things. Please feel free to send me any code you develop that might be generally useful; I will vet it against the academic integrity policy (no sharing of solutions across teams...) and post it in a code repository.

Mastermind Resources

Here are a few online resources that you may find useful.

"Mastermind", Wikipedia, http://en.wikipedia.org/wiki/Mastermind_%28board_game%29

"Investigations into the Master Mind Board Game," Toby Nelson, February 1999, http://www.tnelson.demon.co.uk/mastermind/

Jeff Stuckman and Guo-Qiang Zhang, "Mastermind is NP-Complete," arXiv:cs/0512049, http://arxiv.org/abs/cs.CC/0512049

T. Mahadeva Rao, "An algorithm to play the game of mastermind," SIGART Bulletin 82, October 1982, http://portal.acm.org/ft_gateway.cfm?id=1056607&type=pdf&coll=Portal&dl=GUIDE&CFID=55406954&CFTOKEN=38683075