Mastermind is a classic guessing game. One player (the "chooser") chooses a "secret code" and the other player (the "guesser") must guess the code within a certain number of guesses.
In the standard version of the game, the code consists of four pegs ("code pegs"), each of which is one of six different colors (white, orange, yellow, red, blue, or green). For each guess, the guesser chooses four pegs of these same colors. The chooser then gives feedback on the guess, using smaller red and white pegs ("key pegs"). One red peg is given for each correct color that is also in the correct location; one white peg is given for each correct color that is in the wrong position.
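To make the feedback rule concrete, here is a minimal Common Lisp sketch of the key-peg computation, assuming codes and guesses are represented as lists of color symbols; the function name score-guess and this representation are illustrative choices, not part of any provided course code:

```lisp
(defun score-guess (code guess &key (colors '(white orange yellow red blue green)))
  "Return two values: the number of red key pegs (right color, right position)
and the number of white key pegs (right color, wrong position)."
  (let* ((exact  (count t (mapcar #'eql code guess)))   ; positional matches
         (common (loop for c in colors                  ; color matches, ignoring position
                       sum (min (count c code) (count c guess)))))
    (values exact (- common exact))))
```

For example, (score-guess '(red red blue green) '(red blue yellow green)) returns 2 and 1: the first red and the green match exactly, and the guessed blue matches the code's blue in a different position.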
The game can be made arbitrarily more difficult by increasing the number of code pegs and the number of colors. Your job is to write a Mastermind guesser that can play any size game.
Of course, this five-guess strategy isn't at all scalable: as the number of code pegs and the number of colors grows, the complexity of this approach increases dramatically. In fact, Stuckman and Zhang showed that the "Mastermind Satisfiability Problem" (given a set of guesses and their key-peg scores, deciding whether any code is consistent with all of them) is NP-complete (i.e., presumed to have a complexity that grows exponentially in the number of code pegs and colors).
The 1982 SIGART article by Rao gives a heuristic algorithm that seems unlikely to be optimal, but which would probably work for a game of any size. (Whether it would scale efficiently is another question...)
Here are three simple strategies that aren't very scalable, but will always work:
These might be good starting points when you begin your implementation, as baselines to make sure you can get something working, and against which you can compare the performance of a more sophisticated strategy.
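One concrete (and deliberately unscalable) baseline of this flavor is to enumerate every possible code and, after each guess, discard the candidates whose feedback against that guess differs from the feedback actually received; any surviving candidate is a legal next guess. Here is a sketch, building on the score-guess function above (all-codes and prune are hypothetical helper names):

```lisp
(defun all-codes (pegs colors)
  "List every possible code of length PEGS over the color list COLORS."
  (if (zerop pegs)
      (list '())
      (let ((shorter (all-codes (1- pegs) colors)))
        (loop for c in colors
              append (mapcar (lambda (tail) (cons c tail)) shorter)))))

(defun prune (candidates guess reds whites colors)
  "Keep only the candidates whose feedback on GUESS matches (REDS WHITES)."
  (remove-if-not (lambda (candidate)
                   (multiple-value-bind (r w)
                       (score-guess candidate guess :colors colors)
                     (and (= r reds) (= w whites))))
                 candidates))
```

Because the secret code is always consistent with every answer it receives, it can never be pruned, so the candidate set shrinks with each guess until the code is found.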
So how can you solve this problem in the general case (any number of code pegs and colors)? Well, that is your first of three challenges: (1) implementing a general-purpose Mastermind algorithm. (This challenge will be based on a fixed-size game, of a size to be announced a few weeks before the tournament, once I have a sense of what's a reasonable size for most guessers to handle fairly well. As a rough guess, I would think this game might have 8-10 pegs and 8-10 colors.) Clearly, one wants to choose a guess at each step that will maximally reduce the search space (number of remaining legal guesses) at the next step. How exactly to do this is an open question.
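One well-known way to make "maximally reduce the search space" concrete is a worst-case (minimax) criterion, essentially the one Knuth used for the standard game: score each potential guess by the size of the largest group of candidates that could survive any single feedback, and play the guess whose worst case is smallest. The sketch below, which for simplicity only considers guesses drawn from the remaining candidates, shows the idea; note that evaluating it over the full candidate set is itself exponential, which is exactly the scalability issue raised below.

```lisp
(defun worst-case-size (guess candidates colors)
  "Size of the largest set of candidates that could survive GUESS."
  (let ((buckets (make-hash-table :test #'equal)))
    (dolist (code candidates)
      (multiple-value-bind (r w) (score-guess code guess :colors colors)
        (incf (gethash (list r w) buckets 0))))   ; group candidates by feedback
    (loop for n being the hash-values of buckets maximize n)))

(defun choose-guess (candidates colors)
  "Pick the candidate guess with the smallest worst-case bucket."
  (loop with best = nil and best-size = nil
        for guess in candidates
        for size = (worst-case-size guess candidates colors)
        when (or (null best-size) (< size best-size))
          do (setf best guess best-size size)
        finally (return best)))
```

Expected-size, entropy-based, or sampling-based variants of the same idea are natural directions to explore.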
Poking around on the web, you'll find any number of research papers and mathematical analyses of different variants of the Mastermind problem. I expect that some of you will look through this literature and discover some ideas that will inspire your implementation. You are absolutely welcome to use any such research (whether formally published or not), but as always, you must cite your sources properly!!
Other teams will probably use a more ad hoc approach, playing the game for a while and using their intuition to design heuristic strategies that seem to work. Still others may take a mathematical approach, trying to analyze the set of remaining solutions (without exhaustively enumerating them) in order to choose a guess that can be shown mathematically to reduce the number of possible remaining solutions.
As the problem gets larger (more code pegs and colors), any given algorithm will take longer to make each guess, and more guesses will be required. Therefore, the second challenge is (2) to make your Mastermind algorithm as scalable as possible, in terms of both CPU time and the number of guesses needed to guess the code.
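For a sense of scale, the number of possible codes is simply colors^pegs, so the game sizes mentioned above already correspond to enormous search spaces:

```lisp
(expt 6 4)    ; =>        1296   (standard game: 6 colors, 4 pegs)
(expt 8 8)    ; =>    16777216   (8 colors, 8 pegs)
(expt 10 10)  ; => 10000000000   (10 colors, 10 pegs)
```

Any approach that explicitly enumerates or stores all remaining candidates will hit a wall quickly, so a scalable design needs some way to reason about the candidate set implicitly or approximately.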
But what if the player who is choosing the code is known to be biased in some way? What if you knew that the chooser never uses pegs of certain colors, never uses the same color more than once, always puts the same color in the first and last places, or prefers codes with fewer colors (but sometimes uses more colors just to throw you off the scent)? This leads us to your third and final challenge: (3) learning a chooser's strategy and using this knowledge to guide the guessing process.
For the learning challenge, I will design and implement several biased code choosers, each with a particular "pattern" of guesses that you will need to implement a learning approach to discover. Examples of possible biases might be: Never choose a code with a red or orange peg; always choose codes with exactly three colors; never use a color more than once; always choose codes that alternate colors; have a probabilistic preference for fewer colors (e.g., with probability .5, use exactly 1 color; with probability .25, use exactly 2 colors, etc.); always have the colors appear in a certain order (e.g., red is always before yellow). As you can see, there is a nearly unlimited space of possible biases, so you will have to think hard about your learning feature space and algorithm.
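To make the idea concrete, here is one purely hypothetical biased chooser, written in the same style as the sketches above; it is not one of the actual choosers that will be used in the course:

```lisp
(defun biased-choose (pegs &optional (colors '(white orange yellow red blue green)))
  "A made-up bias: never uses RED or ORANGE, and with probability 0.5
produces a code that uses exactly one color."
  (let ((allowed (remove-if (lambda (c) (member c '(red orange))) colors)))
    (if (< (random 1.0) 0.5)
        (make-list pegs :initial-element (nth (random (length allowed)) allowed))
        (loop repeat pegs
              collect (nth (random (length allowed)) allowed)))))
```

Given a large sample of codes from such a chooser, your learner should be able to discover regularities like "red and orange never appear" and "about half the codes use a single color," and then bias the guesser's candidate ordering accordingly.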
There will be a reasonably varied set of "example biased choosers" (where I will give you a Lisp implementation of the chooser and describe the strategy that it uses, along with a large sample of codes generated by the chooser).
There will also be several "test biased choosers" that will be used in the tournament. For these, you will receive only a large sample of generated codes. Most of these choosers will use strategies that are in some way similar to the example strategies; but two or three of them will use some completely novel strategy.
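As a deliberately oversimplified illustration of what learning from a sample of codes might look like, one could estimate how often each color appears in each position and use those estimates to rank the guesser's remaining candidates. The helpers below are hypothetical, and a featurization this crude would miss biases such as "alternating colors" or "red always before yellow"; choosing a richer feature space is exactly the design problem you will need to solve.

```lisp
(defun position-color-counts (samples pegs colors)
  "PEGS x |COLORS| array counting how often each color appears in each position."
  (let ((counts (make-array (list pegs (length colors)) :initial-element 0)))
    (dolist (code samples counts)
      (loop for color in code
            for pos from 0
            do (incf (aref counts pos (position color colors)))))))

(defun candidate-weight (candidate counts colors n-samples)
  "Rough likelihood of CANDIDATE under the learned per-position frequencies.
Add-one smoothing keeps unseen colors unlikely rather than impossible."
  (let ((w 1))
    (loop for color in candidate
          for pos from 0
          do (setf w (* w (/ (+ 1 (aref counts pos (position color colors)))
                             (+ n-samples (length colors))))))
    w))
```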
I will distribute the test choosers at least several weeks before the tournament so that you have some time to run your learner on them. You should not use the results of learning to try to figure out by hand what each strategy is; you may, however, do your learning in advance (offline) and then produce a different parameterized guesser for each of the test choosers.
All teams should design a guessing strategy that is expected to do well on challenge #1. However, I expect that different teams will choose to focus more of their energy on either challenge #2 or #3. Of course, you should have some solution that is plausibly scalable (works for any size problem at least in theory), and some learning approach (even if it is quite simple and limited). Three-person teams, however, will be expected to have good designs for all three categories. Also, all of the project writeups (as discussed below) must address all three challenges and what approaches you designed to try to meet them.
There are three graded components of the project: (1) your implemented system, (2) a group project report, and (3) your performance in the class tournament. Note that you will be graded primarily on the thoughtfulness and clarity of your design and presentation, not on your algorithm's raw performance. This gives you the freedom to try a risky approach that is interesting from a design perspective but might not work very well. An approach that doesn't work well and is also naive, trivial, or poorly motivated will, however, receive a correspondingly low grade.
Implemented Lisp-based Mastermind Player (40%). Your implementation will be graded on correctness (i.e., whether you correctly implemented the solution described in your paper), as well as on design (generality, clarity, and elegance) and readability (indentation, comments, modularity, etc.).
Project Report (50%). Each team must submit a project report describing your approach, your experience in designing and implementing it, and the performance of your system. I would expect these reports to be somewhere in the 5-10 page range, but there is no strict minimum or maximum as long as you include all of the required information. Your report should include the following:
Class Tournament (10%). The tournament performance grade will be based on whether your player runs successfully (i.e., you have a working system at the time of the dry run and the final tournament -- about half of the credit) and on how well it actually performs in the tournament (i.e., how your system fares against the other approaches -- about half of the credit). Note that the latter component, about 5% of your total project grade, is the only part of your grade that is directly tied to performance (run time and number of guesses). So it is important to have a player that plays well, but it is even more important to have a strong justification and clear design for your player.
The class tournament will pit teams' guessers against each other to see who is the best guesser in each of the three "challenge categories." For the fixed-size game, the score (to be minimized) will be the average number of guesses across some number of random codes (e.g., 5). For the scalability challenge, I haven't yet decided whether to use a competitive ratio of some sort (e.g., some measure of relative performance on a large problem to performance on a smaller problem), or whether to impose a time or number-of-guesses limit and measure the maximum problem size that each team's guesser can handle within that time limit. (I'm open to proposals...) For the learnability challenge, the winners will be the teams whose guessers have the smallest number of guesses, on average, for each of the several biased code choosers.
I have not yet written any utility functions to print tournament results or do other useful things. Please feel free to send me any code you develop that might be generally useful; I will vet it against the academic integrity policy (no sharing of solutions across teams...) and post it to a code repository.
"Mastermind", Wikipedia, http://en.wikipedia.org/wiki/Mastermind_%28board_game%29
"Investigations into the Master Mind Board Game," Toby Nelson, February 1999, http://www.tnelson.demon.co.uk/mastermind/
Jeff Stuckman and Guo-Qiang Zhang, "Mastermind is NP-Complete," arXiv:cs/0512049, http://arxiv.org/abs/cs.CC/0512049
T. Mahadeva Rao, "An algorithm to play the game of mastermind," SIGART Bulletin 82, October 1982, http://portal.acm.org/ft_gateway.cfm?id=1056607&type=pdf&coll=Portal&dl=GUIDE&CFID=55406954&CFTOKEN=38683075