CfP: 3DUI 2016

3DUI 2016
IEEE 11th Symposium on 3D User Interfaces
March 19-20, 2016
Greenville, South Carolina, USA
Call for Papers and Technotes
The IEEE 3DUI 2016 Symposium solicits high-quality Papers and Technotes within the scope of 3D UIs. Papers (up to 8 pages) should describe original and mature research results and will typically include some evidence of the value of the research, such as a user evaluation, formal proof, or well-substantiated argument.
Technotes (up to 4 pages) should contain unpublished preliminary results of research, application, design, or system work. Technotes do not have the hard requirement of an evaluation. The presentation of novel research is a key requirement, and this includes (but is not limited to) technology, techniques, and systems.
Each Paper or Technote should be classifiable as mainly covering 3D UI Research, Application & Design, or Systems using the following guidelines for each:
Research papers should describe results that advance the state of the art in 3D UIs, in particular in the areas of interaction, novel input devices, human factors, or algorithms.
Application & Design papers should explain how the authors built novel and/or creative 3D UIs to solve interesting problems. Each Paper should include an evaluation of the use of the 3D UI in the given application domain.
Systems papers should present results that advance the state of the art in 3D UI technology, software, or hardware. Papers should describe how the implementers integrated known techniques and technologies to produce an effective 3D UI system, along with any lessons learned in the process, and should include an evaluation of the system, such as benchmarking of latency, frame rate, jitter, or accuracy. Simply describing a system without providing appropriate measures does not constitute a satisfactory Systems Paper.
Topics of the symposium include (but are not limited to):
– 3D input devices
– 3D display and sensory feedback (all five senses)
– 3D interaction techniques
– 3D user interface metaphors
– Mobile 3DUIs
– Hybrid 3DUIs
– Non-fatiguing 3DUIs
– Desktop 3DUIs
– 3DUIs for VR and AR
– Evaluation methods for 3DUIs
– Human perception of 3D interaction
– Collaborative 3D interaction
– Software technologies to support 3DUIs
– Applications of 3DUIs: Games, entertainment, CAD, education, etc.
Paper and Technote Submission Dates:
Abstract submissions due (required) – November 24, 2015 (midnight PST)
Paper/Technote submissions due      – November 28, 2015 (midnight PST)
Author notification                 – December 23, 2015 (midnight PST)
Camera-ready Papers/Technotes       – January  10, 2016 (midnight PST)
Note that an abstract must be uploaded prior to the Paper or Technote. This facilitates assigning reviewers, as the review process is on a tight schedule. Authors are strongly encouraged to submit videos of their work as part of their submissions.
Papers and Technotes should be prepared in IEEE VGTC format and submitted in PDF through the submission web site; they will be reviewed by the program committee and external reviewers. Reviewing is double-blind, so submissions (including any videos, etc.) should be suitably anonymized. Accepted Papers and Technotes will be published by IEEE in the official Symposium proceedings. An international awards committee will also select the Best Paper and the Best Technote.
The authors of the best Papers from 3DUI 2016 will be invited to submit extended versions of their work to the International Journal of Human-Computer Studies (IJHCS) and IEEE Transactions on Visualization and Computer Graphics (IEEE TVCG).
Please note that we welcome extended versions of appropriate work that has been accepted as a Poster at IEEE VR 2016 to be submitted as Papers to 3DUI 2016. However, other combinations (e.g., VR Poster + 3DUI Technote, VR Short Paper + 3DUI Paper, etc.) are not allowed, and will be rejected without review. Please be mindful of the Double-Submissions Policy.
Bruce H. Thomas
Rob Lindeman
Maud Marchal
IEEE 3DUI 2016 Symposium Chairs
Conference Committee
Program Chairs:
Bruce Thomas,    University of South Australia, Australia
Rob Lindeman,    Worcester Polytechnic Institute, USA
Maud Marchal,    IRISA-INSA, France
Contact:    program.chairs[at]3dui.org
Web Chair:
Ferran Argelaguet Sanz, INRIA, France
Contact:    webchair[at]3dui.org
Publicity Chairs:
Kevin Ponto,     University of Wisconsin at Madison, USA
Yaoping Hu,      University of Calgary, Canada
Contact:    publicity.chairs[at]3dui.org
Posters Chairs:
Amy Banic,       University of Wyoming, USA
Christoph Borst, University of Louisiana at Lafayette, USA
Arindam Dey,     Worcester Polytechnic Institute, USA
Contact:    posters.chairs[at]3dui.org
3DUI Contest Chairs:
Michael Marner, University of South Australia, Australia
Benjamin Weyers, RWTH Aachen, Germany
Ryan P. McMahan, University of Texas at Dallas, USA
Rongkai Guo, Kennesaw State University, USA
Contact:    contest.chairs[at]3dui.org

Predictive Raging in Video Games

By Matthew Fendt

I am curious as to whether a human opponent in an adversarial video game can be guided to perform a desired action based on textual or verbal cues from the player.  For example, in Starcraft, aggressive messages from the Zerg player at the beginning of the game may convince the opponent that the Zerg player is going to perform a "Zerg rush," an early attack gambit.  If the opponent prepares a defense against the Zerg rush, they will likely fend it off, but at the cost of early-game development.  If the Zerg player then plays a different early game instead, they would gain valuable extra time against the fooled opponent.

Starcraft would be a good test environment for the experiment, since it is easy to build a bot that plays the game and sends text messages during play.  It is also possible to observe differences in the building or unit production choices the human player makes based on the cue given by the bot.  The players would have to be somehow convinced that the bot's messages are reliable, for example by mixing these cues with real information from the bot.  The player's actions would then be compared with how a player would ordinarily behave in the same situation, to see whether they change their gameplay to match the expected results.  The baseline would be that the player does not change their behavior based on the message.

If player behavior can be changed by sending messages to the player, designers could use this information to improve AI behavior in video games, allowing the AI to act more like a human player.

An Intelligent User Interface to Create Plan-based Game Level Behavior

By Julio Bahamon

Narrative is an important component of modern digital games. As computer graphics have become more sophisticated and realistic, it has become increasingly necessary for game designers to develop new techniques to capture the attention of their audience. One such technique is the introduction of a narrative element as an integral part of the game experience [1]. The ability to effectively tell a story, or to allow the user to be immersed in and interact with a story, thus becomes an important asset in the toolkit of game designers. However, narrative generation and editing can be very time-consuming and effort-intensive processes. Moreover, the implementation of a quest or an adventure in a game environment may often require several months of effort that translate into just a few hours of game play once the game goes live [2]. Factors like these can have a significant impact on a developer's ability to deliver a game on time, or to deliver it at all. It is in this inherent difficulty of the creative process that I find the motivation for an Intelligent User Interface (IUI) that reduces the complexity and effort needed to develop narrative for digital game environments.

The main objective of the proposed IUI would be to provide game designers and content developers with a user interface that takes full advantage of the computational power provided by modern Artificial Intelligence planners (in particular, their application as story-generation systems) while insulating the user from the complexities involved in the operation of a planner. Our goal should be to facilitate a process in which users do not require any expertise in AI planning (or in AI at all) in order to create stories for a digital game environment.

These ideas have their theoretical foundation in two areas of Computer Science that have been the focus of much research in digital games and artificial intelligence: Interactive Narrative and Collaborative Problem Solving. Interactive Narrative focuses on the development of computational models that support algorithms and processes leading to systems that can generate stories procedurally. Collaboration can be defined as mutual engagement among participants in a coordinated effort to solve a problem together [3, 4].

The interface should be designed with an emphasis on ease of use and, above all, with the core objective of insulating the user from the complexity introduced by the planner. Being able to provide an abstraction that hides the details of the planner functionality is paramount. My working hypothesis is that game developers can best benefit from planner technology if they are able to use it without having to master plan-description languages or AI concepts. The solution I propose would use a plan-based approach to story generation that enables users to create short narratives without the need to write complete scripts and detailed character actions. The design guidelines of such software could be based on the theories of collaboration mentioned above and on mixed-initiative techniques [5] that rely on a well-defined collaborative interaction between a human operating the software and an intelligent agent tasked with providing assistance.

[1] Mateas, M. and Stern, A. Façade: An Experiment in Building a Fully-Realized Interactive Drama. In Game Developers Conference, Game Design Track. 2003.

[2] Morningstar, C. and Farmer, F. R. The Lessons of Lucasfilm's Habitat. In Cyberspace: First Steps. 1991.

[3] Roschelle, J. and Teasley, S. D. The Construction of Shared Knowledge in Collaborative Problem Solving. In Computer Supported Collaborative Learning, 69. Springer. 1995.

[4] Roschelle, J. and Teasley, S. D. The Construction of Shared Knowledge in Collaborative Problem Solving. NATO ASI Series F: Computer and Systems Sciences, 128:69. 1994.

[5] Horvitz, E. Principles of Mixed-Initiative User Interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '99), 159-166. New York, NY, USA: ACM. 1999.

Self-Similarity and the Potential for Fractal-Based Stories

By Stephen Ware

Last weekend I came upon an interesting article about generating random terrain using simple fractals.  If you’ve played games like Minecraft, you’re already familiar with the results of these techniques: surprisingly natural landscapes that are significantly and interestingly different almost every time.

The article points out that fractal-based terrain is simple to implement and scales well because fractals are self-similar.  A self-similar design is one in which, if you isolate and magnify a small part, it looks like the larger whole.  The Mandelbrot Set is a famous fractal that you've probably seen before.  It looks something like a large circle with smaller circular polyps growing out of the perimeter.  If you zoom in on any of those polyps, you will eventually find a near-exact copy of the larger set.  In short: each small part contains the whole.
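
For reference, the recipe behind that famous image is remarkably simple: a complex number c belongs to the set exactly when the iteration

    z_{n+1} = z_n^2 + c,  starting from z_0 = 0,

never escapes to infinity (in practice, when |z_n| never exceeds 2).  All of that intricate, self-similar structure falls out of that one-line rule.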

I wonder to what extent the principle of self-similarity applies to stories.  What kind of output would we get from a fractal-based story generation algorithm?

The article is dedicated to the Diamond-Square Algorithm.  It's a little too complicated to reproduce in this blog post, but the introduction eases you in by way of a simpler predecessor: the Midpoint Displacement Algorithm.  Imagine a straight line.  Now, take the midpoint of that line and move it a random amount upward or downward.  Repeat that process for the left and right halves of the line: find the midpoint and move it randomly up or down.  Keep dividing the line segments in half and repeating this process, and eventually you get something that looks like the silhouette of a mountain range off in the distance.  The same basic principle, applied to a grid instead of a line, produces a full 3D landscape.
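
To make the idea concrete, here is a minimal sketch of the one-dimensional version in Python (the function and parameter names are mine, not the article's):

    import random

    def midpoint_displacement(left, right, depth, roughness=0.5):
        """Recursively displace midpoints between two (x, y) endpoints.

        Returns the list of (x, y) points forming the silhouette,
        including both endpoints.
        """
        if depth == 0:
            return [left, right]
        mid_x = (left[0] + right[0]) / 2.0
        mid_y = (left[1] + right[1]) / 2.0
        # The displacement shrinks with the segment width, so the detail
        # gets finer as the segments get shorter.
        spread = roughness * (right[0] - left[0])
        mid = (mid_x, mid_y + random.uniform(-spread, spread))
        left_half = midpoint_displacement(left, mid, depth - 1, roughness)
        right_half = midpoint_displacement(mid, right, depth - 1, roughness)
        return left_half + right_half[1:]  # avoid duplicating the midpoint

    # A mountain silhouette: start from a flat line and split 8 times.
    silhouette = midpoint_displacement((0.0, 0.0), (100.0, 0.0), depth=8)

Each recursive call works on a segment that is a scaled-down copy of the original problem, which is exactly where the self-similarity comes from.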

The mountain range silhouette produced by this algorithm is self-similar.  If you zoom in on any one part, it will have the same basic properties as the whole.  They won’t look exactly the same because of the random movements, but essentially each peak of the mountain contains an entire mountain range in itself.  Now, how is this relevant to stories?

Firstly, these self-similar algorithms are very simple to implement.  Obviously, no story generation system that hopes to cover a wide variety of human narratives is ever going to be simple, but we might be able to push the complexity down into the low-level realization of the story while keeping the high-level design simple.  Imagine using midpoint displacement to create the conflict arc of a story: natural rises and falls in intensity as the story progresses.  The Midpoint Displacement Algorithm can be seeded with values that define its basic shape; in other words, you could start with a slanted line or a curve rather than a straight one.  Freytag's Triangle seems like a good place to start for a fractal-based intensity arc.
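
In the sketch above, seeding would just mean running the displacement between several control points instead of one flat line.  A crude Freytag-like seed might look like this (the specific coordinates are invented purely for illustration):

    def displace_between(seed_points, depth, roughness=0.25):
        """Run midpoint displacement between each pair of seed points."""
        arc = []
        for a, b in zip(seed_points, seed_points[1:]):
            segment = midpoint_displacement(a, b, depth, roughness)
            arc.extend(segment[:-1])
        arc.append(seed_points[-1])
        return arc

    # Exposition, rising action, climax, falling action, resolution:
    freytag_seed = [(0, 0.1), (40, 0.4), (60, 1.0), (80, 0.5), (100, 0.2)]
    intensity_arc = displace_between(freytag_seed, depth=5)

The seed fixes the large-scale shape of the arc; the random displacement fills in the small-scale rises and falls of tension within it.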

More importantly, fractals scale well.  Novels are made up of chapters, chapters of scenes, and so on.  A fractal-based story generator could maintain a single theme or basic structural element throughout both the story at large and each of its constituent parts (with enough random variation to avoid stagnation).

More complicated fractal generation methods, like the Perlin noise heightmaps used in Minecraft, allow you to sample any part of the space at any time.  Chunks of a Minecraft map are generated dynamically as you explore them, but they are guaranteed to fit together seamlessly no matter what order you visit them in.  Imagine creating one single, unified story for the entire community of an MMORPG.  This task would be daunting for even an army of human writers, but if the story generation were guided by a fractal, perhaps a truly massive story could be generated that had a single theme, while still maintaining the property that each local element was consistent with its neighbors and fit seamlessly into the whole.
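
What makes that possible is that the underlying noise is a pure function of position and a world seed, so any chunk can be computed independently, in any order.  Here is a rough sketch of that idea using simple hash-based value noise (real Perlin noise uses interpolated gradients and is more involved):

    import hashlib
    import math

    def lattice_value(seed, x, y):
        """Deterministic pseudo-random value in [0, 1) at an integer lattice point."""
        digest = hashlib.sha256(f"{seed}:{x}:{y}".encode()).digest()
        return int.from_bytes(digest[:8], "big") / 2.0**64

    def value_noise(seed, x, y):
        """Bilinearly interpolated lattice noise: a pure function of
        (seed, x, y), so chunks sampled at different times always agree
        at their shared borders."""
        x0, y0 = math.floor(x), math.floor(y)
        fx, fy = x - x0, y - y0
        v00 = lattice_value(seed, x0, y0)
        v10 = lattice_value(seed, x0 + 1, y0)
        v01 = lattice_value(seed, x0, y0 + 1)
        v11 = lattice_value(seed, x0 + 1, y0 + 1)
        top = v00 + fx * (v10 - v00)
        bottom = v01 + fx * (v11 - v01)
        return top + fy * (bottom - top)

    # Because the function is pure, resampling a point during a later
    # "chunk" yields exactly the value seen when generating an earlier one:
    assert value_noise(42, 16.0, 7.5) == value_noise(42, 16.0, 7.5)

A story generator built the same way could, in principle, answer "what is happening at this place and time in the world?" deterministically from a seed, without having generated the rest of the story first.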

Fractals are useful for terrain generation because they produce realistic looking landscapes.  Their usefulness to story generation will hinge on whether or not they can produce natural and realistic storyscapes.  As story generation research continues to progress, I will be interested to see what role fractals have to play.

Computational Narrative and Games T-CIAIG Issue

IEEE Transactions on Computational Intelligence and AI in Games (T-CIAIG)

Call for papers: Special Issue on Computational Narrative and Games

Special issue editors: Ian Horswill, Nick Montfort and R. Michael Young

Stories in both their telling and their hearing are central to human experience, playing an important role in how humans understand the world around them. Entertainment media and other cultural artifacts are often designed around the presentation and experience of narrative. Even in video games, which need not be narrative, the vast majority of blockbuster titles are organized around some kind of quest narrative and many have elaborate stories with significant character development. Games, interactive fiction, and other computational media allow the dynamic generation of stories through the use of planning techniques, simulation (emergent narrative), or repair techniques. These provide new opportunities, both to make the artist’s hand less evident through the use of aleatory and/or automated methods and for the audience/player to more actively participate in the creation of the narrative.

Stories have also been deeply involved in the history of artificial intelligence, with story understanding and generation being important early tasks for natural language and knowledge representation systems. And many researchers, particularly Roger Schank, have argued that stories play a central organizing role in human intelligence. This viewpoint has also seen a significant resurgence in recent years.

The T-CIAIG Special Issue on Computational Narrative and Games solicits papers on all topics related to narrative in computational media and of relevance to games, including but not limited to:

  • Storytelling systems
  • Story generation
  • Drama management
  • Interactive fiction
  • Story presentation, including performance, lighting, staging, music and camera control
  • Dialog generation
  • Authoring tools
  • Human-subject evaluations of systems

Papers should be written to address the broader T-CIAIG readership, with clear and substantial technical discussion and relevance to those working on AI techniques for games. Papers must make sufficient contact with the AI-for-narrative literature to provide useful insights or directions for future work in AI, but they need not be limited to the documentation and analysis of algorithmic techniques. Other genres of papers that could be submitted include:

  • Documentation of complete implemented systems
  • Aesthetic critique of existing technologies
  • Interdisciplinary studies linking computational models or approaches to relevant fields such as narratology, cognitive science, literary theory, art theory, creative writing, theater, etc.
  • Reports from artists and game designers on successes and challenges of authoring using existing technologies

Authors should follow the normal T-CIAIG guidelines for their submissions, but should clearly identify their papers for this special issue during the submission process. T-CIAIG accepts letters, short papers, and full papers. See http://www.ieee-cis.org/pubs/tciaig/ for author information. Extended versions of previously published conference/workshop papers are welcome, but must be accompanied by a cover letter that explains the novel and significant contribution of the extended work.

Deadline for submissions: September 21, 2012

Notification of Acceptance: December 21, 2012

Final copy due: April 19, 2013

Expected publication date: June or September 2013