Download the current PC build here, and the current Gear VR build here.
Tools: Unity, C#, XML, Google Draw
Role: System Designer, Technical Designer
Team Size: Solo
Development Period: December 2016 – Present
A pitch I made for my submission to the GDC Experimental Gameplay Workshop. The video gives a basic overview of the sentence-forming experience.
During the holidays at the end of 2016, I decided to use my spare time to implement an idea for a prototype. I had been thinking about a more interesting way for the player to engage in conversation with characters: the most prevalent "active" choice, the dialogue tree, has been used in video games since at least 1983, with Enix's Portopia Serial Murder Case. Regrettably, simulated conversation in games hasn't evolved much since then, aside from a few interesting experiments like the "conversation bosses" in Deus Ex: Human Revolution.
Portopia Serial Murder Case (1983) was one of the first games allowing the player to choose questions to ask a character from a menu, effectively a dialogue tree.
Using my understanding of Goal-Oriented Action Planning (GOAP) AI, I thought about giving characters key-value pairs representing emotions rather than goals. Statements the player made would have values attached to them that would affect the character's emotions. Finally, I had the idea of breaking sentences into individual clauses, allowing the player to create a statement by combining different fragments. A finished statement's effect would then simply be the sum of its fragments' emotion values.
A diagram showing the sentence-forming mechanic.
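To make this concrete, here is a minimal sketch of how the data might be modeled in C#. All of the names here (EmotionValues, Fragment, FragmentType) are illustrative, not the prototype's actual identifiers.

// A minimal sketch of the data model described above.
public enum FragmentType { FirstClause, Conjunction, SecondClause }

public class EmotionValues
{
    public int Happiness, Sadness, Anger, Confusion;

    // Summing two sets of emotion values, component by component.
    public static EmotionValues operator +(EmotionValues a, EmotionValues b)
    {
        return new EmotionValues
        {
            Happiness = a.Happiness + b.Happiness,
            Sadness   = a.Sadness + b.Sadness,
            Anger     = a.Anger + b.Anger,
            Confusion = a.Confusion + b.Confusion
        };
    }

    // Scaling by a conjunction's modifier (e.g. -1 to negate a clause).
    public static EmotionValues operator *(EmotionValues a, int modifier)
    {
        return new EmotionValues
        {
            Happiness = a.Happiness * modifier,
            Sadness   = a.Sadness * modifier,
            Anger     = a.Anger * modifier,
            Confusion = a.Confusion * modifier
        };
    }
}

public class Fragment
{
    public string Text;
    public FragmentType Type;
    public int Modifier = 1;                           // only meaningful for conjunctions
    public EmotionValues Values = new EmotionValues(); // zero for conjunctions
}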
Another idea that interested me was letting the player make both a statement and its opposite (such as "This is true" as well as "This is not true"). This led to the creation of "conjunctions": small fragments that come between clauses. Instead of having emotion values, a conjunction has a "modifier" that the values of the clause after it are multiplied by. For instance, in the diagram above, the statement "I know that Claudio did kill your brother" would apply +6 Anger and +2 Sadness to the character the player is talking with, while "I know that Claudio did not kill your brother" would apply -6 Anger and -2 Sadness.
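Continuing the hypothetical types above, scoring a finished sentence then comes down to one line: the conjunction's modifier multiplies the values of the clause that follows it, and the result is added to the first clause's values.

public static class SentenceEvaluator
{
    // Scores a sentence built from [first clause, conjunction, second clause].
    public static EmotionValues Evaluate(Fragment first, Fragment conjunction, Fragment second)
    {
        // "did" (modifier +1) vs. "did not" (modifier -1) flips the
        // second clause's entire emotional effect.
        return first.Values + (second.Values * conjunction.Modifier);
    }
}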
Another aspect borrowed from GOAP AI is the method the character uses to select its next statement. After the values of the player's sentence are applied to the character's emotions, the character searches through the set of responses branching off from the current point in the conversation. Each of these has a set of "target values" matching the mood of the statement it's attached to. The response whose target values are closest to the NPC's actual emotion values is selected, advancing the conversation.
A diagram showing how an NPC selects the next branch of a conversation.
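Here is a sketch of that branch-selection step, using the same hypothetical types as before. The article above only says the "closest" response wins without naming a metric, so Manhattan distance is assumed here purely for illustration.

using System;
using System.Collections.Generic;

public class ResponseNode
{
    public int Id;
    public string Line;
    public EmotionValues TargetValues;
    public List<int> BranchingNodeIds;
}

public static class ResponseSelector
{
    public static ResponseNode SelectResponse(EmotionValues npc, List<ResponseNode> candidates)
    {
        ResponseNode best = null;
        int bestDistance = int.MaxValue;
        foreach (var node in candidates)
        {
            // Manhattan distance between the NPC's current emotions and
            // the response's target values (assumed metric).
            int distance = Math.Abs(npc.Happiness - node.TargetValues.Happiness)
                         + Math.Abs(npc.Sadness - node.TargetValues.Sadness)
                         + Math.Abs(npc.Anger - node.TargetValues.Anger)
                         + Math.Abs(npc.Confusion - node.TargetValues.Confusion);
            if (distance < bestDistance)
            {
                bestDistance = distance;
                best = node;
            }
        }
        return best; // the branch whose mood best matches the NPC's current state
    }
}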
Given the complexity of this system, I knew that hardcoding conversations in this format would be extremely inefficient and messy. Fortunately, I had prior experience using XML in projects, so creating a data-driven solution wouldn't be an issue. After defining a format for conversations split into "nodes", I created a test XML file and implemented functions to parse it, alongside the code for the core of the in-game system.
<conversation startNode="0" happiness="3" sadness="8" anger="6" confusion="8">
  <node id="0" response="Stand tall! Ambition's debt is paid!" branchingNodes="1,2,3"
        targetHappiness="3" targetSadness="8" targetAnger="6" targetConfusion="8">
    <fragment text="Really? " type="firstClause" modifier="1">
      <emotionValues happiness="0" sadness="0" anger="0" confusion="1"></emotionValues>
    </fragment>
    <fragment text="Absolutely! " type="firstClause" modifier="1">
      <emotionValues happiness="-1" sadness="0" anger="0" confusion="0"></emotionValues>
    </fragment>
    <fragment text="It looks like you " type="conjunction" modifier="1"></fragment>
    <fragment text="It doesn't look at all like you " type="conjunction" modifier="-1"></fragment>
    <fragment text="killed Caesar." type="secondClause" modifier="1">
      <emotionValues happiness="-2" sadness="1" anger="2" confusion="0"></emotionValues>
    </fragment>
    <fragment text="saved Rome." type="secondClause" modifier="1">
      <emotionValues happiness="2" sadness="1" anger="-1" confusion="0"></emotionValues>
    </fragment>
    <fragment text="killed Rome." type="secondClause" modifier="1">
      <emotionValues happiness="-1" sadness="0" anger="2" confusion="1"></emotionValues>
    </fragment>
  </node>
  ...
</conversation>
An example of a “node” in a conversation’s XML file. A node contains its target emotion values, the NPC’s line, the IDs of nodes branching off from this one, and the sentence fragments with their emotion values and modifiers.
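For the curious, here is a sketch of what the parsing side might look like with System.Xml. The attribute names match the sample above, but the surrounding types (ResponseNode, EmotionValues) are the hypothetical ones from the earlier sketches, not the project's actual code; fragments and their <emotionValues> children would be parsed the same way.

using System.Collections.Generic;
using System.Xml;

public static class ConversationParser
{
    // Parses a single <node> element from the format shown above.
    public static ResponseNode ParseNode(XmlElement element)
    {
        var node = new ResponseNode
        {
            Id = int.Parse(element.GetAttribute("id")),
            Line = element.GetAttribute("response"),
            TargetValues = new EmotionValues
            {
                Happiness = int.Parse(element.GetAttribute("targetHappiness")),
                Sadness   = int.Parse(element.GetAttribute("targetSadness")),
                Anger     = int.Parse(element.GetAttribute("targetAnger")),
                Confusion = int.Parse(element.GetAttribute("targetConfusion"))
            },
            BranchingNodeIds = new List<int>()
        };

        // branchingNodes is a comma-separated list of node IDs, e.g. "1,2,3".
        foreach (string id in element.GetAttribute("branchingNodes").Split(','))
            node.BranchingNodeIds.Add(int.Parse(id));

        return node;
    }
}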
I was also interested in simulating other aspects of conversation, such as looking at or away from someone. Having recently received a Gear VR headset, I figured an eye contact system would be a good use of the technology. Knowing that public speakers and performers often look at one audience member before switching to another, I created logic that amplifies the impact of a sentence if the player is looking at the NPC they're delivering it to. However, staring at a person for too long can make them uncomfortable. Since I was told a good rule of thumb is to look away every 6 to 8 seconds, an NPC's negative emotion values begin to rise if the player stares at them for longer than that.
A short video explaining how the eye contact system works in the game, as well as how I overcame some obstacles porting the game to VR.
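A rough sketch of how such a gaze timer might look in Unity follows. The NpcEmotions component, its AddDiscomfort method, and the specific thresholds are assumptions for illustration, not the project's actual code.

using UnityEngine;

// Hypothetical NPC component: accumulates discomfort while being stared at.
public class NpcEmotions : MonoBehaviour
{
    public float Discomfort;

    public void AddDiscomfort(float amount) { Discomfort += amount; }
}

// Tracks which NPC the player is looking at, and for how long, via a
// ray cast from the player's head (the VR camera).
public class GazeTracker : MonoBehaviour
{
    public float discomfortThreshold = 7f;  // "look away every 6 to 8 seconds"
    public float emphasisMultiplier = 1.5f; // boost for sentences delivered with eye contact

    private NpcEmotions currentTarget;
    private float gazeTime;

    void Update()
    {
        var head = Camera.main.transform;
        RaycastHit hit;
        NpcEmotions npc = Physics.Raycast(head.position, head.forward, out hit)
            ? hit.collider.GetComponent<NpcEmotions>()
            : null;

        if (npc != null && npc == currentTarget)
        {
            gazeTime += Time.deltaTime;
            // Staring past the threshold makes the NPC uncomfortable,
            // nudging its negative emotion values upward.
            if (gazeTime > discomfortThreshold)
                npc.AddDiscomfort(Time.deltaTime);
        }
        else
        {
            currentTarget = npc;
            gazeTime = 0f;
        }
    }

    // A finished sentence's emotion values are amplified if the player is
    // making eye contact with the listener when delivering it.
    public float GetEmphasis(NpcEmotions listener)
    {
        return listener == currentTarget ? emphasisMultiplier : 1f;
    }
}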
What’s next?
Probably the most pressing thing I should do next is create a more fully-realized example level, so the player can get a better sense of what problems this system would let them solve. I've gone back and forth on the setting: I initially conceived this as part of an autobiographical game about growing up with Asperger's Syndrome, then reconsidered and steered toward a comedic take on Act 3 of Julius Caesar, but I'm thinking I might return to the initial concept.
Lastly, in a production environment, I can't imagine narrative designers would be too thrilled about writing all their conversations in a long, linear XML file. I'm looking at several approaches to an in-editor tool that could read, modify, and export conversation files through a visual scripting interface.