Chess AI Research
A chess bot is being built, but the development team (Grant) is unsure how sophisticated the evaluation function that scores a given board needs to be. The team is equally unsure about what depth to use in the MINIMAX-based search for an action.
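For reference, here is a minimal depth-limited minimax sketch. It assumes the agent is written in Python with the python-chess library, which may not match the actual repo; `evaluate` stands in for whichever evaluation function is plugged in, and alpha-beta pruning is omitted for brevity.

```python
import chess

def minimax(board: chess.Board, depth: int, maximizing: bool, evaluate) -> float:
    """Depth-limited minimax; `evaluate` scores the board from White's point of view."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    scores = []
    for move in board.legal_moves:
        board.push(move)
        scores.append(minimax(board, depth - 1, not maximizing, evaluate))
        board.pop()
    return max(scores) if maximizing else min(scores)

def choose_move(board: chess.Board, depth: int, evaluate) -> chess.Move:
    """Pick the move whose minimax value is best for the side to move."""
    maximizing = board.turn == chess.WHITE
    best_move, best_score = None, float("-inf") if maximizing else float("inf")
    for move in board.legal_moves:
        board.push(move)
        score = minimax(board, depth - 1, not maximizing, evaluate)
        board.pop()
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_move, best_score = move, score
    return best_move
```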
The three depth candidates are 2, 4, and 6, referred to as depth_1, depth_2, and depth_3, respectively. Two initial ideas for the evaluation technique are: 1) counting pieces (eval_1) and 2) weighting piece values as a function of how far they are from the center of the board (eval_2).
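To make the two ideas concrete, here is a rough sketch of eval_1 and eval_2 under the same python-chess assumption; the piece values and the center-distance falloff are illustrative choices, not the provided implementations.

```python
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def eval_1(board: chess.Board) -> float:
    """Piece counting: total material, positive numbers favor White."""
    score = 0.0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

def eval_2(board: chess.Board) -> float:
    """Material weighted by how far each piece sits from the center of the board."""
    score = 0.0
    for square, piece in board.piece_map().items():
        # Chebyshev distance from the central squares, in the range 0..3.
        dist = max(abs(chess.square_file(square) - 3.5),
                   abs(chess.square_rank(square) - 3.5)) - 0.5
        weight = 1.0 - 0.1 * dist  # illustrative falloff toward the edges
        value = PIECE_VALUES[piece.piece_type] * weight
        score += value if piece.color == chess.WHITE else -value
    return score
```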
The objective of this project is to report on the pairing of search depth and evaluation sophistication. For instance, can we get away with a crude evaluation if we’re allowed to look further ahead?
Setup 0.
The code repo is located here: https://github.com/GarntS/471-proj1
TODO
- Add a third evaluation function (eval_3) that is more sophisticated than the two that have been provided. Don’t forget to give credit where credit is due.
- Three depth options and three evaluation options result in 9 possible chess agents. To make your recommendation to the development team, play each of these agents against the others and produce a bar graph of the overall win rates. The bar graph should have 9 ticks on the x-axis (one per agent) with win rates on the y-axis; a round-robin sketch follows this list.
- Present your findings on win rates for the depth-evaluation combinations you explored as a screenshot of the bar graph. Submit the screenshot and your code here: [email protected]
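As referenced above, the following is a round-robin sketch built on the hypothetical `choose_move`, `eval_1`, and `eval_2` helpers from the earlier snippets, plus matplotlib for the bar graph. The agent naming, move cap, and one-game-per-ordered-pairing schedule are illustrative; depths 4 and 6 will be slow without alpha-beta pruning.

```python
import itertools
import chess
import matplotlib.pyplot as plt

DEPTHS = {"depth_1": 2, "depth_2": 4, "depth_3": 6}
EVALS = {"eval_1": eval_1, "eval_2": eval_2}  # add eval_3 here for all 9 agents
AGENTS = {f"{d}/{e}": (depth, fn)
          for d, depth in DEPTHS.items()
          for e, fn in EVALS.items()}

def play_game(white, black, max_moves=200):
    """Return 1.0 if White wins, 0.0 if Black wins, 0.5 for a draw or move cutoff."""
    board = chess.Board()
    while not board.is_game_over() and board.fullmove_number < max_moves:
        depth, evaluate = white if board.turn == chess.WHITE else black
        board.push(choose_move(board, depth, evaluate))
    return {"1-0": 1.0, "0-1": 0.0}.get(board.result(claim_draw=True), 0.5)

wins = {name: 0.0 for name in AGENTS}
games = {name: 0 for name in AGENTS}
# Ordered pairs, so every agent plays every other agent as both White and Black.
for (a, agent_a), (b, agent_b) in itertools.permutations(AGENTS.items(), 2):
    score = play_game(agent_a, agent_b)
    wins[a] += score
    wins[b] += 1.0 - score
    games[a] += 1
    games[b] += 1

plt.bar(list(AGENTS), [wins[n] / games[n] for n in AGENTS])
plt.ylabel("Win rate")
plt.xticks(rotation=45, ha="right")
plt.tight_layout()
plt.show()
```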
LINKS
The previous iteration of this project involved building the chess agent in JavaScript by following Brandon Yanofsky’s tutorial, Building a Simple Chess AI, available at https://byanofsky.com/2017/07/06/building-a-simple-chess-ai/