Recall that there are three challenges in the Mastermind project:

  - Playing a solid game of Mastermind
  - Scaling up
  - Learning

In the past, all teams coded to a fixed Lisp API, and running the
tournament was easy because a single driver could be used for all
of the projects.  This semester many different programming
languages were used, so a new method for comparing performance is
needed.

Here's what we'll do.  Each team needs to do three things during the
tournament, which runs for a single class period.  You can allot
CPU and class time however you like across the following:

  (1) Playing a solid game of Mastermind

      Each team must report the mean number of guesses they require to
      solve a standard game (4 pegs, 6 colors) over no fewer than 10
      games.  I will give out the random seed to use in generating the
      codes in class on the day of the tournament.  If you feel that the
      first 10 codes generated from that seed are "unfair", feel free to
      generate and solve more of them (using the same seed).
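
      If it helps, here is a minimal Python sketch of the bookkeeping
      for this challenge.  The generation procedure and the
      solve(code, pegs, colors) interface are illustrative assumptions,
      not a required API:

        import random

        def generate_codes(seed, n_games=10, pegs=4, colors=6):
            # Generate n_games secret codes from one seed.  The exact
            # seeded procedure used in class may differ; this is just
            # an illustration.
            rng = random.Random(seed)
            return [tuple(rng.randrange(colors) for _ in range(pegs))
                    for _ in range(n_games)]

        def report_mean_guesses(seed, solve, n_games=10, pegs=4, colors=6):
            # solve(code, pegs, colors) is your team's solver; it is
            # assumed to return the number of guesses it needed.
            codes = generate_codes(seed, n_games, pegs, colors)
            return sum(solve(code, pegs, colors) for code in codes) / len(codes)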

  (2) Scaling up

      Each team must choose to do at least one of the following, but
      you can do all three if you want:

      Scaling up the number of colors:

        Using a random seed I provide in class, generate codes with 4
        pegs and as many colors as you want, and report the mean
        number of guesses required to solve at least 10 codes.  As
        class progresses, we'll keep a leader board that shows the
        number of colors and the mean number of guesses for the top 5
        teams.  To "win" this challenge you have to be able to solve
        games with more colors than anyone else.  If two or more teams
        max out at the same number of colors, the team with the lowest
        mean number of guesses wins.
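
        In Python terms, the ranking rule for this leader board amounts
        to something like the sketch below (the function name and the
        entry format are illustrative assumptions, not provided code):

          def beats_color_leader(challenger, leader):
              # Entries are (colors, mean_guesses) pairs.
              c_colors, c_mean = challenger
              l_colors, l_mean = leader
              if c_colors != l_colors:
                  return c_colors > l_colors  # more colors wins outright
              return c_mean < l_mean          # tie on colors: fewer mean guesses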

      Scaling up the number of pegs:

        This will work exactly like scaling up the number of colors,
        except the number of colors will be fixed at 6 and the number
        of pegs will grow.

      Scaling up both pegs and colors:

        This will work much like the other two scaling challenges.
        You will need to report the mean number of guesses over at
        least 10 games.  Let's say you are able to solve 6 pegs and 20
        colors and are the current leader.  For someone to take over
        the top spot, they will have to solve 10 codes with 7 (or
        more) pegs and 20 colors, or 6 pegs and 21 (or more) colors.
        That is, to beat the leader you have to solve codes with more
        colors, more pegs, or both (with no fewer of either), or you
        have to solve the same number of pegs and colors with fewer
        guesses on average.
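
        Reading the example above as a rule, the comparison looks like
        this in Python (again, the names and the entry format are just
        illustrations):

          def beats_combined_leader(challenger, leader):
              # Entries are (pegs, colors, mean_guesses) triples.
              c_pegs, c_colors, c_mean = challenger
              l_pegs, l_colors, l_mean = leader
              if c_pegs < l_pegs or c_colors < l_colors:
                  return False          # fewer of either dimension loses
              if c_pegs > l_pegs or c_colors > l_colors:
                  return True           # strictly bigger game wins
              return c_mean < l_mean    # same size: fewer mean guesses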

  (3) Learning

      In this challenge I will provide two datasets from which to
      learn.  One will be generated using one of the existing biased
      methods already in mmcodes.py, and the other using a new biased
      method.  Your team must pick one of the datasets, apply your
      learning method, and then scale up both the number of pegs and
      the number of colors as described above, and we'll use the same
      method for determining the winner.
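
      As one purely illustrative starting point (not the required
      method), a simple learning approach is to estimate per-position
      color frequencies from the dataset so that your solver can try
      likelier codes first.  The dataset format assumed here is a
      guess; the real format will come with the datasets:

        from collections import Counter

        def learn_position_bias(dataset, pegs, colors):
            # dataset is assumed to be a list of codes, each a tuple
            # of color indices drawn by one of the biased generators.
            counts = [Counter(code[pos] for code in dataset)
                      for pos in range(pegs)]
            total = len(dataset)
            # Add-one smoothing keeps unseen colors at nonzero probability.
            return [[(counts[pos][c] + 1) / (total + colors)
                     for c in range(colors)]
                    for pos in range(pegs)]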