By Matthew Fendt
I am curious whether a human opponent in an adversarial video game can be guided to perform a desired action based on textual or verbal cues from the player. For example, in Starcraft, aggressive messages at the beginning of the game from the Zerg player may convince the opponent that the Zerg player is going to perform a "Zerg rush," an early attack gambit. If the opponent prepares a defense against the Zerg rush, they will likely fend it off, but at the cost of early game development. If the Zerg player then plays a different early game, they would gain valuable extra time over the fooled opponent.
Starcraft would be a good test environment for the experiment, since it is easy to make a bot that plays and gives text communication during the game. Also, it is possible to observe the differences in building or unit production that the human player makes based on the cue given by the bot. The players would have to be somehow convinced that the bot's messages are reliable, for example by mixing these cues with real information that the bot gives. The player's actions would then be compared with how a player would really behave in the simulated situation, to see if they change their gameplay to match the expected results. The baseline would be that the player does not change their behavior based on the message.
If player behavior can be changed by giving messages to the player, designers could use this information to improve AI behavior in video games. This would allow for the AI to act more like a human player.
By Julio Bahamon
Narrative is an important component of modern digital games. As computer graphics have become more sophisticated and realistic, it has become increasingly necessary for game designers to develop new techniques to capture the attention of their audience. One such technique is the introduction of a narrative element as an integral part of the game experience. The ability to effectively tell a story, or to allow the user to be immersed in and interact with a story, thus becomes an important asset in the toolkit of game designers. However, narrative generation and editing can be very time-consuming and effort-intensive processes. Moreover, the implementation of a quest or an adventure in a game environment may often require several months of effort that translate into just a few hours of game play once the game goes live. Factors like these can have a significant impact on a developer's ability to deliver a game on time, or to deliver it at all. It is in this inherent difficulty of the creative process that I find the motivation for an Intelligent User Interface (IUI) that reduces the complexity and effort needed to develop narrative for digital game environments.
The main objective of the proposed IUI would be to provide game designers and content developers with a user interface that takes full advantage of the computational power of modern Artificial Intelligence planners (in particular, their application as story-generation systems) while insulating the user from the complexities involved in the operation of a planner. It should be our goal to facilitate a process where users do not require any expertise in AI planning (or in AI, for that matter) in order to create stories for a digital game environment.
These ideas have their theoretical foundation in two areas of Computer Science that have been the focus of much research in digital games and artificial intelligence: Interactive Narrative and Collaborative Problem Solving. Interactive Narrative focuses on the development of computational models that support algorithms and processes leading to the implementation of systems that can generate stories in a procedural manner. Collaboration can be defined as mutual engagement among participants in a coordinated effort to solve a problem together [3, 4].
The interface should be designed with an emphasis on ease of use and above all with the core objective of insulating the user from the complexity introduced by the planner. Being able to provide an abstraction that hides the details of the planner functionality is paramount. My working hypothesis is that game developers can best benefit from the use of planner technology if they are able to do so without having to master plan description languages or AI concepts. The solution I propose would use a plan-based approach to story generation that enables users to create short narratives without the need to write complete scripts and detailed character actions. The design guidelines of such software could be based on the theories of collaboration previously mentioned and on mixed-initiative techniques that rely on a well-defined collaborative interaction between a human operating the software and an intelligent agent that is tasked with providing assistance.
[1] Mateas, M. and Stern, A. Façade: An experiment in building a fully-realized interactive drama. In Game Developers Conference, Game Design Track, 2003.
[2] Morningstar, C. and Farmer, F. R. The lessons of Lucasfilm's Habitat. In Cyberspace: First Steps, 1991.
[3] Roschelle, J. and Teasley, S. D. The construction of shared knowledge in collaborative problem solving. In Computer Supported Collaborative Learning, 69, Springer, 1995.
[4] Roschelle, J. and Teasley, S. D. The construction of shared knowledge in collaborative problem solving. NATO ASI Series F: Computer and Systems Sciences, 128:69, 1994.
[5] Horvitz, E. Principles of mixed-initiative user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '99), 159–166, New York, NY, USA, ACM, 1999.
By Stephen Ware
Last weekend I came upon an interesting article about generating random terrain using simple fractals. If you’ve played games like Minecraft, you’re already familiar with the results of these techniques: surprisingly natural landscapes that are significantly and interestingly different almost every time.
The article points out that fractal-based terrain is simple to implement and scales well because fractals are self-similar. A self-similar design is one in which, if you isolate and magnify a small part, it looks like the larger whole. The Mandelbrot Set is a famous fractal that you've probably seen before. It looks something like a large circle with smaller circular polyps growing out of its perimeter. If you zoom in on any of those polyps, you will eventually find a near-exact copy of the larger set. In short: each small part contains the whole.
I wonder to what extent the principle of self-similarity applies to stories? What kind of output would we get from a fractal-based story generation algorithm?
The article is dedicated to the Diamond Square Algorithm. It’s a little too complicated to reproduce in this blog post, but the introduction eases you in by way of a simpler predecessor: the Midpoint Displacement Algorithm. Imagine a straight line. Now, take the midpoint of that line and move it a random amount upward or downward. Repeat that process for the left and right halves of the line: find the midpoint and move it randomly up or down. Keep dividing the line segments in half and repeating this process, and eventually you get something that looks like the silhouette of a mountain range off in the distance. This basic principle can be extended to 3 dimensions to get a 3D landscape.
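The process described above can be sketched in a few lines of Python. This is a minimal illustration of 1D midpoint displacement; the function name and parameters are my own, not taken from the article:

```python
import random

def midpoint_displacement(left, right, depth, roughness=1.0, seed=None):
    """Return a list of heights forming a 1D mountain silhouette.

    left, right: heights of the two endpoints.
    depth: number of recursive subdivisions (result has 2**depth + 1 points).
    roughness: range of the random vertical displacement at the top level;
               it is halved at each level so detail shrinks with scale.
    """
    rng = random.Random(seed)
    heights = [left, right]
    spread = roughness
    for _ in range(depth):
        next_heights = []
        for a, b in zip(heights, heights[1:]):
            # Move each segment's midpoint up or down by a random amount.
            mid = (a + b) / 2 + rng.uniform(-spread, spread)
            next_heights += [a, mid]
        next_heights.append(heights[-1])
        heights = next_heights
        spread /= 2  # halving the displacement is what makes it self-similar
    return heights

silhouette = midpoint_displacement(0.0, 0.0, depth=8, seed=42)
print(len(silhouette))  # 257 points: 2**8 + 1
```

Plotting the resulting heights against their indices produces exactly the jagged, distant-mountain-range silhouette the article describes.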
The mountain range silhouette produced by this algorithm is self-similar. If you zoom in on any one part, it will have the same basic properties as the whole. They won’t look exactly the same because of the random movements, but essentially each peak of the mountain contains an entire mountain range in itself. Now, how is this relevant to stories?
Firstly, these self-similar algorithms are very simple to implement. Obviously, no story generation system that hopes to cover a wide variety of human narratives is ever going to be simple, but we might be able to push the complexity down into the low-level realization of the story while keeping the high-level design simple. Imagine using midpoint displacement to create the conflict arc of a story—natural rises and falls in intensity as the story progresses. The Midpoint Displacement Algorithm can be seeded with values to define its basic shape. In other words, you could start with a slanted line or a curve rather than a straight one. Freytag’s Triangle seems like a good place to start for a fractal-based intensity arc.
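To make the seeding idea concrete, here is one way it might look in Python. The seed values below are my own rough approximation of Freytag's rising-and-falling shape, not figures from any published model:

```python
import random

def displace(points, depth, spread, rng):
    """Midpoint displacement over an arbitrary seed polyline.

    The seed points themselves are preserved; random detail is added
    between them, with the displacement range halved at each level.
    """
    for _ in range(depth):
        out = []
        for a, b in zip(points, points[1:]):
            out += [a, (a + b) / 2 + rng.uniform(-spread, spread)]
        out.append(points[-1])
        points, spread = out, spread / 2
    return points

# Illustrative Freytag-style seed: exposition, rising action, climax,
# falling action, resolution (intensities on a 0-1 scale, my invention).
freytag = [0.1, 0.4, 1.0, 0.5, 0.2]
arc = displace(freytag, depth=5, spread=0.15, rng=random.Random(7))
# 'arc' follows the seed's overall rise and fall, with fractal local variation.
```

Because the seed points survive unchanged, the global climax stays where the author put it; the algorithm only fills in smaller-scale tension and release around it.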
More importantly, fractals scale well. Novels are made up of chapters, chapters of scenes, etc. A fractal-based story generator could maintain a single theme or basic structural element both in the story at large and in each of its constituent parts (with enough random variation to avoid stagnation).
More complicated fractal generation methods, like the Perlin noise heightmaps used in Minecraft, allow you to sample any part of the space at any time. Chunks of a Minecraft map are generated dynamically as you explore them, but they are guaranteed to fit together seamlessly no matter what order you visit them in. Imagine creating one single, unified story for the entire community of an MMORPG. This task would be daunting for even an army of human writers, but if the story generation was guided by a fractal, perhaps a truly massive story could be generated that had a single theme, while still maintaining the property that each local element was consistent with its neighbors and fit seamlessly into the whole.
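To get a feel for that order-independence, here is a sketch using 1D value noise, a simpler cousin of Perlin noise (this is not Minecraft's actual implementation; the names and the hashing scheme are my own). Because each lattice value depends only on the world seed and the coordinate, any chunk can be generated at any time and adjacent chunks always agree at their shared boundary:

```python
import hashlib

def lattice_value(seed, x):
    """Deterministic pseudo-random value in [0, 1) for integer coordinate x."""
    digest = hashlib.sha256(f"{seed}:{x}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def value_noise(seed, x):
    """Smoothly interpolated 1D value noise, samplable at any x, in any order."""
    x0 = int(x // 1)
    t = x - x0
    t = t * t * (3 - 2 * t)  # smoothstep easing between lattice points
    a, b = lattice_value(seed, x0), lattice_value(seed, x0 + 1)
    return a + t * (b - a)

# Two chunks generated independently agree exactly where they meet.
chunk_a = [value_noise("world-1", x / 10) for x in range(0, 11)]
chunk_b = [value_noise("world-1", x / 10) for x in range(10, 21)]
assert chunk_a[-1] == chunk_b[0]  # seamless boundary
```

The analogy for story generation would be a function from (world seed, narrative coordinate) to local story content, so that any player's local storyline could be generated on demand yet remain consistent with every neighbor's.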
Fractals are useful for terrain generation because they produce realistic looking landscapes. Their usefulness to story generation will hinge on whether or not they can produce natural and realistic storyscapes. As story generation research continues to progress, I will be interested to see what role fractals have to play.
(Originally posted here: Computational Narrative and Games T-CIAIG Issue)
IEEE Transactions on Computational Intelligence and AI in Games (T-CIAIG)
Call for papers: Special Issue on Computational Narrative and Games
Special issue editors: Ian Horswill, Nick Montfort and R. Michael Young
Stories in both their telling and their hearing are central to human experience, playing an important role in how humans understand the world around them. Entertainment media and other cultural artifacts are often designed around the presentation and experience of narrative. Even in video games, which need not be narrative, the vast majority of blockbuster titles are organized around some kind of quest narrative and many have elaborate stories with significant character development. Games, interactive fiction, and other computational media allow the dynamic generation of stories through the use of planning techniques, simulation (emergent narrative), or repair techniques. These provide new opportunities, both to make the artist’s hand less evident through the use of aleatory and/or automated methods and for the audience/player to more actively participate in the creation of the narrative.
Stories have also been deeply involved in the history of artificial intelligence, with story understanding and generation being important early tasks for natural language and knowledge representation systems. And many researchers, particularly Roger Schank, have argued that stories play a central organizing role in human intelligence. This viewpoint has also seen a significant resurgence in recent years.
The T-CIAIG Special Issue on Computational Narrative and Games solicits papers on all topics related to narrative in computational media and of relevance to games, including but not limited to:
- Storytelling systems
- Story generation
- Drama management
- Interactive fiction
- Story presentation, including performance, lighting, staging, music and camera control
- Dialog generation
- Authoring tools
- Human-subject evaluations of systems
Papers should be written to address the broader T-CIAIG readership, with clear and substantial technical discussion and relevance to those working on AI techniques for games. Papers must make sufficient contact with the AI for narrative literature to provide useful insights or directions for future work in AI, but they need not be limited to the documentation and analysis of algorithmic techniques. Other genres of papers that could be submitted include:
- Documentation of complete implemented systems
- Aesthetic critique of existing technologies
- Interdisciplinary studies linking computational models or approaches to relevant fields such as narratology, cognitive science, literary theory, art theory, creative writing, theater, etc.
- Reports from artists and game designers on successes and challenges of authoring using existing technologies
Authors should follow normal T-CIAIG guidelines for their submissions, but clearly identify their papers for this special issue during the submission process. T-CIAIG accepts letters, short papers and full papers. See http://www.ieee-cis.org/pubs/tciaig/ for author information. Extended versions of previously published conference/workshop papers are welcome, but must be accompanied by a covering letter that explains the novel and significant contribution of the extended work.
Deadline for submissions: September 21, 2012
Notification of Acceptance: December 21, 2012
Final copy due: April 19, 2013
Expected publication date: June or September 2013