By Matthew Fendt
I am curious as to whether a human opponent in an adversarial video game can be guided to perform a desired action based on textual or verbal cues from the player. For example, in Starcraft, aggressive messages at the beginning of the game from the Zerg player may convince the opponent that the Zerg player is going to perform a "Zerg rush," an early attack gambit. If the opponent prepares a defense against the Zerg rush, they will likely fend it off, but at the cost of early game development. If the Zerg player then performs a different early game strategy instead, they would gain valuable extra time over the fooled opponent.
Starcraft would be a good test environment for the experiment, since it is easy to make a bot that plays and sends text communication during the game. It is also possible to observe differences in the buildings or units the human player produces based on the cue given by the bot. The players would have to be somehow convinced that the bot's messages are reliable, for example by mixing these deceptive cues in with real information from the bot. The player's actions would then be compared with how a player would behave in the simulated situation without the cue, to see if they change their gameplay to match the expected results. The baseline would be that the player does not change their behavior based on the message.
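The comparison described above could be sketched roughly as follows. This is a hypothetical illustration, not an implementation: the defensive structure names and the effect-size margin are assumptions chosen for the example, and a real study would use proper replay parsing and statistical testing.

```python
# Hypothetical sketch of the proposed analysis: did players who received a
# deceptive "rush" cue build early defenses more often than the no-message
# baseline group? Structure names and the margin are illustrative only.

def defense_rate(games):
    """Fraction of games whose early build order includes a defensive structure."""
    defensive = {"bunker", "photon_cannon", "sunken_colony"}
    hits = sum(1 for build in games if defensive & set(build))
    return hits / len(games) if games else 0.0

def cue_changed_behavior(cued_games, baseline_games, margin=0.2):
    """True if the cued group built defenses noticeably more often than baseline."""
    return defense_rate(cued_games) - defense_rate(baseline_games) > margin

# Example: early build orders logged for cued players vs. controls.
cued = [["bunker", "barracks"], ["photon_cannon"], ["gateway"]]
baseline = [["gateway"], ["barracks"], ["hatchery"]]
print(cue_changed_behavior(cued, baseline))  # prints True for this toy data
```

In practice the effect would be measured over many games and subjected to a significance test, but the core comparison between the cued condition and the baseline condition takes this shape.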
If player behavior can be changed by sending messages to the player, designers could use this information to improve AI behavior in video games, allowing the AI to act more like a human player.