Presentation is here

The Problem:

The last topic covered in my AI programming class was finite state machines, a code structure that switches an AI between behaviors depending on its current “state.” States are usually paired with a few actions that the AI can take: a “Flee” state in a shooter enemy’s AI, for example, could include run, duck, and roll actions. In addition, states define transitions to other states that fire when certain criteria are met. The enemy from before may only be able to exit its “Flee” state once it has healed itself enough, for example.
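As a sketch, the core of a state machine can be as small as a table mapping each state to its actions and its exit transitions (all of the names and numbers below are illustrative, not from any real game):

```python
# Each state pairs the actions available in it with its exit transitions.
STATES = {
    "Attack": {"actions": ["fire", "advance"],
               "transitions": [("Flee", lambda ai: ai.health < 25)]},
    "Flee":   {"actions": ["run", "duck", "roll"],
               "transitions": [("Attack", lambda ai: ai.health >= 75)]},
}

class Enemy:
    def __init__(self):
        self.health = 100
        self.state = "Attack"

    def update(self):
        # Take the first transition whose criteria are met, then report
        # the actions available in the (possibly new) current state.
        for next_state, criteria in STATES[self.state]["transitions"]:
            if criteria(self):
                self.state = next_state
                break
        return STATES[self.state]["actions"]
```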


Fig. 1: An example state machine diagram for a ghost in Pac-Man

There are two pretty notable issues with this approach, though:

1) Increasing Complexity: For every pair of states, the AI programmers and enemy designers need to decide whether the two are connected by transitions in both directions, in only one direction (State A can transition to State B, but not the other way around), or not at all. While this is easy to manage in an AI with five states (such as the one in Fig. 1 above), every state added can introduce up to 2 × (n − 1) new connections, where n is the total number of states: adding a sixth state to the machine above could mean up to ten new transitions to consider.

Fig. 2: An example of a hierarchical state machine from Halo 2.

This can be mitigated by keeping branches of the state machine disconnected from one another, or by using a hierarchical state machine (Fig. 2), where a few top-level states each contain a number of “sub-states.” These approaches, however, lead to the second problem:

2) Inflexibility: Just like with design work, it is impossible for developers to anticipate every single scenario an AI may run into. While it may be safe to assume that an enemy will never need to switch directly from a “Swimming” state to a “Shooting” state, there may be some unforeseen edge case where that is exactly what’s needed. Removing connections may reduce complexity in the short term, but it can lead to some very odd-looking behavior when the game is played, or worse, leave the AI effectively broken.


Fig. 3: A comparison between GOAP (left) and State Machines (right) by Jeff Orkin

Jeff Orkin from Monolith provides a very concrete example of this: in No One Lives Forever, enemies sitting at a desk could be shot, but because of the state machine’s architecture, a shot enemy would have to stand up and push in its chair before it could play its death animation. These sorts of problems led Orkin to devise a new approach on one of Monolith’s next games, F.E.A.R., an approach he dubbed “Goal-Oriented Action Planning” (GOAP for short).

Defining GOAP and its advantages:

In GOAP, each type of AI has a set of goals, each with a numeric “insistence.” The higher the insistence, the more the AI wants to work toward accomplishing that goal: a goal with an insistence of 9 takes priority over a goal with an insistence of 4. Each AI type also has a number of actions available to it, and carrying out an action lowers the insistence of the appropriate goal. A concrete example of this could be reloading: as an enemy fires at the player, the insistence of its Reload goal could rise to 8. The enemy’s AI then considers its actions and sees that the “Reload” action can lower that goal’s insistence by 6. The AI acts on “Reload,” refilling its gun’s magazine and lowering the Reload goal’s insistence to 2.
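A sketch of that first step, picking the most insistent goal and the action that best satisfies it (the class names and numbers are mine, not from any shipped implementation):

```python
class Goal:
    def __init__(self, name, insistence):
        self.name = name
        self.insistence = insistence        # higher = more urgent

class Action:
    def __init__(self, name, goal_name, drop):
        self.name = name
        self.goal_name = goal_name          # the goal this action works toward
        self.drop = drop                    # insistence it removes on success

def choose(goals, actions):
    # Pick the most insistent goal, then the action that lowers it the most.
    goal = max(goals, key=lambda g: g.insistence)
    action = max((a for a in actions if a.goal_name == goal.name),
                 key=lambda a: a.drop)
    return goal, action

goals = [Goal("Reload", 8), Goal("KillPlayer", 4)]
actions = [Action("Reload", "Reload", 6), Action("Fire", "KillPlayer", 2)]
goal, action = choose(goals, actions)
goal.insistence -= action.drop              # the Reload goal drops from 8 to 2
```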

This first step is pretty basic: have the AI prioritize its goals and then act on them. The really impressive part comes with the introduction of A-Star. A-Star (A*) is usually thought of only in terms of pathfinding: it searches through a spatial graph and returns a path that the AI can travel along. As Orkin points out in his initial GOAP paper, though, A* can easily be adapted to search through all the possible actions an AI could take (Fig. 4). In this model, the actions serve as the connections, forming a “path” of behaviors leading toward a goal, usually called a Plan or a Schedule.


Fig. 4: A diagram comparing GOAP’s application of A* to that of pathfinding.

But in this application of A*, what are the nodes analogous to? This is where the concept of World States comes in: a world state is just the collection of variables that represents an AI’s understanding of the world. Actions can have prerequisites that must be met before they can be used, and those prerequisites are checked against the world state. For example, a Reload action may require that the “Ammo” variable in an enemy’s world state be below 10; the “Fire” action would eventually make that true, opening Reload up to the AI.
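As a sketch, a world state can be as plain as a dictionary of variables, with each action declaring the values it requires and the values it sets. The names here are illustrative, and to keep the checks simple I use a boolean “low_ammo” flag instead of the “Ammo below 10” comparison from the example above:

```python
# World state: the AI's understanding of the world as plain variables.
world_state = {"low_ammo": False, "line_of_sight": True, "in_cover": False}

class Action:
    def __init__(self, name, preconditions, effects):
        self.name = name
        self.preconditions = preconditions  # values required to use this
        self.effects = effects              # values set once it completes

    def usable(self, state):
        return all(state.get(k) == v for k, v in self.preconditions.items())

    def apply(self, state):
        return {**state, **self.effects}    # returns a modified copy

# Firing empties the magazine, which in turn unlocks Reload.
fire = Action("Fire", {"line_of_sight": True}, {"low_ammo": True})
reload_ = Action("Reload", {"low_ammo": True}, {"low_ammo": False})

reload_.usable(world_state)                 # False: ammo is not low yet
reload_.usable(fire.apply(world_state))     # True: firing opened it up
```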

Using world states, we can give the AI a more complex goal by handing it a goal world state: a set of specific variable values to achieve. When evaluating a sequence of actions, GOAP applies their effects to a duplicate world state in order to keep track of its progress toward the goal. And while pathfinding A* uses a distance calculation as its heuristic for choosing the next node in the path, GOAP can use the discontentment, the difference between the current world state and the goal world state, to judge which next action is most efficient.
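One simple heuristic for this (a sketch; real implementations may weight variables differently) is the number of goal variables the simulated world state still gets wrong, with zero meaning the goal is reached, just like arriving at the destination node in pathfinding:

```python
def heuristic(state, goal_state):
    # Count goal variables the (simulated) world state does not yet satisfy.
    return sum(1 for k, v in goal_state.items() if state.get(k) != v)

heuristic({"low_ammo": True},  {"low_ammo": False})  # 1: one condition unmet
heuristic({"low_ammo": False}, {"low_ammo": False})  # 0: goal reached
```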

One last thing about the use of A*: since actions essentially act as connections between world states, they can be assigned different costs. In pathfinding, connections between nodes are given higher costs to mark routes that are less efficient, implicitly influencing which route an AI will take. In the same way, we can make GOAP prefer taking one action over another: we can have an enemy prefer firing at the player over throwing grenades at them, for example. If the enemy does not have a direct line of sight to the player, though, the grenade might be the more viable option.
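Putting the pieces together, here is a sketch of A* searching over actions instead of map nodes: world states are the graph’s nodes, actions its edges, and action costs bias the search toward preferred behaviors. It extends the Action class from above with a cost; none of this is Orkin’s actual implementation, just an illustration of the technique:

```python
import heapq
import itertools

class Action:
    def __init__(self, name, cost, preconditions, effects):
        self.name, self.cost = name, cost
        self.preconditions, self.effects = preconditions, effects

    def usable(self, state):
        return all(state.get(k) == v for k, v in self.preconditions.items())

    def apply(self, state):
        return {**state, **self.effects}

def heuristic(state, goal):
    return sum(1 for k, v in goal.items() if state.get(k) != v)

def plan(start, goal, actions, max_depth=8):
    tie = itertools.count()         # tie-breaker so dicts are never compared
    frontier = [(heuristic(start, goal), 0, next(tie), start, [])]
    visited = set()
    while frontier:
        _, g, _, state, path = heapq.heappop(frontier)
        if heuristic(state, goal) == 0:
            return path             # the list of actions reaching the goal
        key = tuple(sorted(state.items()))
        if key in visited or len(path) >= max_depth:
            continue
        visited.add(key)
        for action in (a for a in actions if a.usable(state)):
            nxt = action.apply(state)
            cost = g + action.cost  # cheaper actions are explored first
            heapq.heappush(frontier, (cost + heuristic(nxt, goal), cost,
                                      next(tie), nxt, path + [action]))
    return None                     # no plan found within max_depth

# Fire is cheap (preferred); Grenade costs more but needs no line of sight.
fire = Action("Fire", 1, {"line_of_sight": True}, {"player_dead": True})
grenade = Action("Grenade", 3, {"has_grenade": True}, {"player_dead": True})

start = {"line_of_sight": False, "has_grenade": True, "player_dead": False}
result = plan(start, {"player_dead": True}, [fire, grenade])
print([a.name for a in result])     # ['Grenade']: with no line of sight,
                                    # the pricier action wins anyway
```

With line_of_sight set to True in the start state, the same call returns the Fire action instead, since its lower cost makes it the preferred connection.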

Disadvantages of GOAP:

As cool as I think GOAP is (which is really cool), it should be noted that, like everything else, it has its disadvantages.

1) Large Resource Usage

GOAP uses a lot of processing power when creating a plan: in Artificial Intelligence for Games, Ian Millington and John Funge point out that the number of possibilities that may need to be searched is (number of actions) ^ (maximum actions in a plan), so an AI with 10 actions and plans up to 6 actions long faces up to a million possibilities. This can be mitigated by using a more specialized search algorithm; Millington and Funge’s book references Iterative Deepening A* (IDA*), a modified version of A* that is somewhat more efficient for planning. The other big option is to limit the maximum number of actions in a plan: while making a plan with 20 actions may be impressive, it’s rare that an AI will get to carry the whole thing out, and a plan with only 6 actions will still probably look impressive to the player.
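In the planner sketched above, that cap is just the max_depth argument (a property of my sketch, not of GOAP in general):

```python
# Worst case drops from len(actions) ** 20 nodes to len(actions) ** 6.
result = plan(start, {"player_dead": True}, [fire, grenade], max_depth=6)
```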

2) Genre Specificity

It’s because of this processing cost that GOAP is not preferable for more fast-paced gameplay. While it was first developed for first-person shooters, this approach may prove too slow for games with more “twitchy” playstyles, including fighting and racing games, where simpler “insect” AI may prove more advantageous. Conversely, though, this makes GOAP much more suitable for slower-paced games, such as real-time strategies and especially turn-based games.

3) Less Direct Control

While letting the AI decide what to do at runtime may save many headaches, it allows less direct control over which actions may or may not follow each other. Because GOAP looks for the most optimal plan toward its goal, it may repeatedly settle on the same few behaviors, which could be frustrating, grow stale, or break immersion for the player. And if the enemy design is trying to establish a rhythm for the player, more effort may be required to ensure that actions are only executed at the proper points in time.

Because of this, most games that utilize GOAP still run it alongside a simplified state machine that lets the AI engage in more basic actions or scripted behaviors. Orkin points out that F.E.A.R. relies on a three-state model where only one state actually makes use of planning. Half-Life 2, which came out the year before, used a method strikingly close to GOAP; the main difference was that each state in its state machine defined the set of actions an enemy could take. Another advantage of keeping a basic state machine is the ability to interrupt plans so the AI can formulate new ones: if all planning activity is kept to one state, the game can force a transition to another state when an action in the plan fails, requiring the character to make a new plan.
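A sketch of that arrangement, wrapping the planner from earlier in a tiny state machine (the state names are mine, not F.E.A.R.’s):

```python
class PlanningAgent:
    """Minimal FSM wrapper: only the 'Plan' state invokes the planner."""

    def __init__(self, world, goal, actions):
        self.world, self.goal, self.actions = world, goal, actions
        self.fsm_state = "Plan"
        self.queue = []                     # actions left in the current plan

    def update(self):                       # called once per AI tick
        if self.fsm_state == "Plan":
            self.queue = plan(self.world, self.goal, self.actions) or []
            self.fsm_state = "Execute" if self.queue else "Idle"
        elif self.fsm_state == "Execute":
            action = self.queue[0]
            if not action.usable(self.world):
                self.fsm_state = "Plan"     # action failed: force a replan
            else:
                self.world = action.apply(self.world)
                self.queue.pop(0)
                if not self.queue:
                    self.fsm_state = "Idle" # plan complete
```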

While it’s not suited to every game, and it’s extremely rare to see a game depend solely on it, GOAP is a valid alternative to the more common state machine approach. Not only is it very interesting to study, it may be one of the biggest advancements in AI programming of the past decade.
