All about the AI


The computer AI uses a minimax search with alpha-beta pruning to decide what a computer player will do. It searches up to 5 levels deep (stopping after that, and also randomly stopping 1/3 of the time at each level) and then evaluates the board based on the positions and types of the pieces that both players have. The search works on a list (which each function generates at its call) of all the allowed moves for a specified color (all the moves that do not leave the player in check). This list is generated by going through all of the squares of the board (including the attack boards) and collecting the legal moves of every piece of the specified color. After the list of legal moves is generated, I go through it, check whether each move leaves the player in check, and remove the moves that do. When a player has no moves that do not leave them in check, the game is over. Originally, I checked whether a player was in check by getting all of the other player's moves and seeing whether any of them reached the position of the king, but (late one night) I realized a better method: get all of that player's king's moves as if it were each of the other piece types, and see whether it can reach a piece of the same type belonging to the other color (for example, get a white king's moves as if it were a white knight, and see if it can reach the position of a black knight). The operation of getting all of the legal moves is quite expensive, and I originally tried to cache the lists in a balanced tree structure, which stored each color's allowed moves for a given board configuration. Unfortunately, when we started testing the AI we discovered that the tree quickly grew too large (searching 5 levels deep with an average branching factor of 10 gives 10^5 = 100,000 board configurations per turn), so we eventually switched to a hash table structure that stored the last 10,100 board configurations looked at.
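The search described above can be sketched roughly as follows. This is a minimal illustration, not the game's actual code: the move-generation, move-application, and evaluation routines are passed in as functions because the real ones belong to the chess engine, and the tiny toy game at the bottom exists only to show the call shape.

```python
import random

MAX_DEPTH = 5
EARLY_STOP_PROB = 1.0 / 3.0  # the 1/3 random cutoff at each level below the root

def alphabeta(board, depth, alpha, beta, maximizing,
              legal_moves, apply_move, evaluate, rng):
    # legal_moves is assumed to return only moves that do not leave
    # the mover in check, as described above.
    moves = legal_moves(board, maximizing)
    stop_early = depth < MAX_DEPTH and rng.random() < EARLY_STOP_PROB
    if depth == 0 or not moves or stop_early:
        return evaluate(board)  # static evaluation of the position
    if maximizing:
        best = float("-inf")
        for m in moves:
            best = max(best, alphabeta(apply_move(board, m), depth - 1,
                                       alpha, beta, False,
                                       legal_moves, apply_move, evaluate, rng))
            alpha = max(alpha, best)
            if alpha >= beta:  # beta cutoff: opponent will avoid this line
                break
        return best
    best = float("inf")
    for m in moves:
        best = min(best, alphabeta(apply_move(board, m), depth - 1,
                                   alpha, beta, True,
                                   legal_moves, apply_move, evaluate, rng))
        beta = min(beta, best)
        if alpha >= beta:  # alpha cutoff
            break
    return best

# Toy demonstration: the "board" is a number, a move adds or subtracts 1,
# and the evaluation is the number itself.
score = alphabeta(0, MAX_DEPTH, float("-inf"), float("inf"), True,
                  lambda b, mx: [1, -1], lambda b, m: b + m,
                  lambda b: b, random.Random(0))
```

Because each level below the root can stop early, two runs with different random seeds can search very different portions of the tree, which is part of what keeps the move time bounded.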
Most of these configurations are never looked at more than once, but we decided that this relatively small table (about 2 MB if completely full, and usually only about half full because we usually get only one color's moves per board configuration) was worth the savings in computation time. Computing the allowed moves for this game is much more complicated than for regular chess because of the overlapping squares and because the positions of some squares (the attack boards) can actually change between moves. The expense of calculating allowed moves is also why we instituted the random depth cutoff, since we wanted the game to choose moves in a reasonable time (under 30 seconds).
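The cache described above can be sketched as a simple fixed-size table where new entries overwrite old ones, which matches the "last N configurations" behavior. The details here (keying on a hashable board representation plus the color to move) are illustrative assumptions, not the game's actual data layout.

```python
TABLE_SIZE = 10100  # the number of board configurations kept, as described above

class MoveCache:
    """Fixed-size hash table mapping (board, color) -> cached move list."""

    def __init__(self):
        self.keys = [None] * TABLE_SIZE
        self.values = [None] * TABLE_SIZE

    def lookup(self, board, color):
        key = (board, color)
        slot = hash(key) % TABLE_SIZE
        if self.keys[slot] == key:
            return self.values[slot]  # cache hit: reuse the move list
        return None  # miss, or the slot was overwritten by a newer entry

    def store(self, board, color, moves):
        key = (board, color)
        slot = hash(key) % TABLE_SIZE
        self.keys[slot] = key  # overwrite whatever was in this slot
        self.values[slot] = moves
```

Overwriting on collision means the table never grows, trading occasional recomputation for a hard memory bound, which is the trade-off described above.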
The evaluation function is loosely based on the one for normal chess games (see GNU Chess), but it also takes into account the positions of the pieces on attack boards, the positions of the attack boards themselves, and a random factor. We initially tried just evaluating the pieces, but had enormous problems getting the game to do anything at the beginning, since it takes some time before either side can actually take pieces (we have to get all the pieces off an attack board, move it near the middle, and move pieces to the middle board where they can do something interesting to each other). Initially, the game moved a few pieces and then just moved one piece repeatedly back and forth between two squares. We therefore added points for pieces not on their home board and for attack boards in a useful position, and these factors, together with the random weight, make the computer player actually do interesting things.
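The shape of that evaluation might look like the sketch below. The weight values and the piece representation are illustrative assumptions (the actual tuned weights are discussed later); only the structure — material, off-home-board bonus, attack-board bonus, random term — follows the description above.

```python
import random

# Illustrative weights only; the real game tuned its own values.
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}
OFF_HOME_BONUS = 1       # assumed bonus for a piece that has left its home board
ATTACK_BOARD_BONUS = 1   # assumed bonus per usefully placed attack board
RANDOM_WEIGHT = 1

def evaluate(my_pieces, their_pieces, my_useful_attack_boards, rng):
    """my_pieces / their_pieces: lists of (piece_type, off_home_board) pairs."""
    score = 0
    for kind, off_home in my_pieces:
        score += PIECE_VALUES[kind] + (OFF_HOME_BONUS if off_home else 0)
    for kind, off_home in their_pieces:
        score -= PIECE_VALUES[kind] + (OFF_HOME_BONUS if off_home else 0)
    score += ATTACK_BOARD_BONUS * my_useful_attack_boards
    # Small random term: breaks ties so the player doesn't shuffle
    # one piece back and forth between two squares.
    score += RANDOM_WEIGHT * rng.random()
    return score
```

The random term matters most early in the game, when no material can change hands and every move would otherwise score identically.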
After we had the initial evaluation function, we created 10 random sets of weights (each set containing weights for the pawn, rook, knight, bishop, queen, control of attack boards, and the random factor) and used a genetic-style algorithm to choose among them. The algorithm worked as follows: we played each set of weights against each other set for 400 moves (200 for each player), randomly choosing which set was black and which was white, and looked at the result. If the game finished (rare — about once per "generation", or group of weight sets), we gave the winning set 2 points and the losing set -2 points. If neither player won, we evaluated the board using an arbitrary fixed set of weights (actually close to standard chess weights); if that came out close (within 1 or 2 pawns), we instead evaluated based on the state of the middle board (a player that dominated the middle board dominated the game), and gave 1 point to the winner and -1 point to the loser of this evaluation. If the middle-board evaluation was also close (within 1 or 2 pawns), we declared a tie.
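That scoring rule can be written down compactly. The helper values here are assumptions: `final_material` stands for the end-of-match evaluation with the fixed near-standard weights, and `final_middle` for the middle-board evaluation, both positive when white is ahead.

```python
CLOSE = 2  # the "within 1 or 2 pawns" threshold described above

def score_match(winner, final_material, final_middle):
    """Return (white_points, black_points) for one 400-move match.
    winner is 'white', 'black', or None for an unfinished game."""
    if winner == "white":
        return (2, -2)
    if winner == "black":
        return (-2, 2)
    if abs(final_material) > CLOSE:       # a clear material lead decides it
        return (1, -1) if final_material > 0 else (-1, 1)
    if abs(final_middle) > CLOSE:         # fall back to middle-board control
        return (1, -1) if final_middle > 0 else (-1, 1)
    return (0, 0)                         # both evaluations close: a tie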
We started the first generation off with 10 sets of weights that were randomly chosen but close to standard chess weights. At the end of a "generation", we looked at the score of each set of weights, computed a fitness function based on that score, and used it to build the next generation. We chose the next generation by taking the 3 sets with the highest scores and allowing them to survive, and also "cross-breeding" pairs of those sets by choosing a pivot and swapping everything after the pivot between the two sets. This gave us 9 sets of weights, so we also added one random new set. We also allowed the weights to "mutate", giving each individual weight a 1-in-20 chance of going up by 1 or down by 1 rather than staying the same. After 10 generations, we found that the sets of weights had pretty much converged (they were the same except for the random set and the mutations), and we used the converged set for the final game. This resulted in weights that should be better than most of the others tried and that should lead to interesting games (in which the computer actually does something instead of moving one piece back and forth between two positions). The weights that resulted had a pawn weight of 3, the highest allowed value for it. We think this was probably because the game was of limited length, so a computer that took a pawn scored better. The knight and bishop weights converged to 5, and the rook weight converged to 6, which are close to standard chess weights and about what we expected. The queen weight converged to 8, which we think is a little low; we suspect this is because if the queen had a high weight, the computer player would decide that too much could be lost if it were taken, and so wouldn't take any risks with it. The attack-board weight converged to 1, which we think is because the standard evaluation function used at the end of the games didn't look at attack boards.
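The generation step described above — keep the top 3, cross-breed pairs of survivors around a pivot, add one fresh random set, then mutate — might be sketched like this. The weight-vector length and value ranges are illustrative, and the mutation rule is one reading of the "1 in 20 chance" described above (each weight has a 1-in-20 chance of moving, equally likely up or down by 1).

```python
import random

NUM_WEIGHTS = 7   # pawn, rook, knight, bishop, queen, attack boards, random factor
POP_SIZE = 10
MUTATE_PROB = 1.0 / 20.0

def crossover(a, b, rng):
    """Swap everything after a randomly chosen pivot between two weight sets."""
    pivot = rng.randrange(1, NUM_WEIGHTS)
    return a[:pivot] + b[pivot:], b[:pivot] + a[pivot:]

def mutate(weights, rng):
    """Each weight has a 1-in-20 chance of shifting by +/-1."""
    out = []
    for w in weights:
        if rng.random() < MUTATE_PROB:
            w += rng.choice((-1, 1))
        out.append(w)
    return out

def next_generation(population, scores, rng):
    # Rank weight sets by score, highest first.
    ranked = [w for _, w in sorted(zip(scores, population),
                                   key=lambda p: p[0], reverse=True)]
    survivors = ranked[:3]
    # Cross-breed every pair of survivors: 3 pairs x 2 children = 6 sets.
    children = []
    for i in range(3):
        for j in range(i + 1, 3):
            c1, c2 = crossover(survivors[i], survivors[j], rng)
            children += [c1, c2]
    # 3 survivors + 6 children = 9 sets, plus one fresh random set = 10.
    fresh = [rng.randrange(0, 10) for _ in range(NUM_WEIGHTS)]
    new_pop = survivors + children + [fresh]
    return [mutate(w, rng) for w in new_pop]
```

With elitism (the top 3 surviving unchanged apart from mutation), the population tends to converge within a few generations, matching the convergence after 10 generations reported above.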
The random weight converged to 6, right in the middle of the allowed range, which suggests to us that we guessed the range of its possible values correctly.
Keep in mind that, given how long it takes to compute moves, it would be difficult to create a computer player that behaved optimally in all situations. We had considered using a special start-of-game evaluation that would work very well for the beginning of the game, but decided that we didn't want to constrain the computer player too much, and that it would be difficult to decide exactly when that evaluation needed to be called, since there could be situations in the middle of the game where a player was confined to its own board and would need it again. We also wanted a general-purpose function that would work reasonably well in all situations. The result is a computer player that is interesting to watch and that will probably play better than an average human player who doesn't know the subtleties of how the game works very well.