Balancing the next Cyborg Mice game
The Power of Game Balancing by Automating Benchmarking: 12,100 for the price of 1
While we haven’t officially announced our next game, it has been in the prototyping phase for the past few years and has been constantly evolving, devolving and mutating into completely different beasts. Here is a peek into the development process for our upcoming game.
Without giving away any details, we finally settled on a workable game mechanic and needed to test and balance the various powers in the game. I added a CPU vs CPU mode (just like the one in Street Fighter 4) which ran in real time, with most matches lasting about 5 minutes. This was great for one-offs, but it would take far too long to properly benchmark all the various powers against each other.
I then wrote an automated testing system that ran each of the 11 different powers against each other. Running 100 matches for each test case gives a total of 12,100 (11 x 11 x 100) simulated matches. How long does all this take? About 5 minutes, roughly the time it takes to watch a single real-time match play out. Freakin' beautiful.
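The overall shape of such a benchmark harness can be sketched as below. This is a minimal illustration, not the game's actual code: `simulate_match` is a hypothetical stand-in for a headless, faster-than-real-time match simulation, and the crude "higher power number is stronger" rule exists only so the sketch runs end to end.

```python
import random
from collections import defaultdict

NUM_POWERS = 11          # powers are numbered 1..11
MATCHES_PER_PAIRING = 100

def simulate_match(power_a, power_b, rng):
    # Stand-in for an accelerated, renderless match simulation. The real game
    # would run its update loop as fast as possible until one side wins.
    # Here each power's index doubles as a crude strength value (assumption).
    return 1 if rng.random() < power_a / (power_a + power_b) else 2

def run_benchmark(rng=None):
    rng = rng or random.Random(0)
    wins = defaultdict(int)  # (power_a, power_b) -> CPU 1 win count
    total = 0
    for a in range(1, NUM_POWERS + 1):
        for b in range(1, NUM_POWERS + 1):
            for _ in range(MATCHES_PER_PAIRING):
                if simulate_match(a, b, rng) == 1:
                    wins[(a, b)] += 1
                total += 1
    return wins, total

wins, total = run_benchmark()
print(total)  # 12100 matches: 11 x 11 x 100
```

Because nothing is rendered and no real-time clock is involved, the only cost per match is the simulation itself, which is what makes 12,100 matches feasible in minutes.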
Here are the results from the final test:
So what do all the numbers mean? CPU 1 is GREEN and CPU 2 is RED. As you can see, when an AI is matched up against another AI using the same power, the results are about even, shown in YELLOW. The most powerful power is number 7, which is something I had predicted before even writing the benchmark system.
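That colour coding can be reproduced from the raw win counts. The sketch below maps each cell's CPU 1 win rate to the chart's colours; the 40%/60% thresholds are my assumption, since the post doesn't say where the cutoffs sit.

```python
def cell_colour(cpu1_wins, matches=100):
    # Map CPU 1's win rate in one cell of the results grid to a colour:
    # mostly CPU 1 wins -> GREEN, mostly CPU 2 wins -> RED, roughly even
    # (e.g. a power mirror-matched against itself) -> YELLOW.
    # The 0.4 / 0.6 cutoffs are illustrative assumptions.
    rate = cpu1_wins / matches
    if rate > 0.6:
        return "GREEN"
    if rate < 0.4:
        return "RED"
    return "YELLOW"

print(cell_colour(93))  # GREEN: CPU 1 dominated this pairing
print(cell_colour(48))  # YELLOW: near-even, as in a mirror match
```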
So it looks like the powers are now properly balanced and should offer a good challenge for players as they work their way up to the higher-level powers. If this benchmarking had been performed in real time, it would have taken 10 hours to test each power against every other power just ONE time, and it wouldn't have provided sufficient data for accurate testing and balancing.
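The 10-hour figure follows directly from the match length quoted earlier:

```python
MATCH_MINUTES = 5
pairings = 11 * 11                    # every power vs every power, one match each
real_time_hours = pairings * MATCH_MINUTES / 60
print(round(real_time_hours, 1))      # about 10 hours for a single pass
```

And that single pass would yield only one data point per pairing, versus the 100 the accelerated run produces.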
The lesson to learn here is to perform accelerated automation whenever possible.