    Desirability = Tweaker *
                   Raven_Feature::Health(pBot) *
                   Raven_Feature::TotalWeaponStrength(pBot);

    //bias the value according to the personality of the bot
    Desirability *= m_dCharacterBias;
  }

  return Desirability;
}
If your game design requires that the bots’ personalities persist between
games, you should create a separate script file for each bot containing the
biases (plus any other bot character-specific data, such as weapon aiming
accuracy, weapon selection preferences, etc.). There are no bots of this type
in Raven, however; each time you run the program the bots’ desirability
biases are assigned random values in the constructor of
Goal_Think, like so:
//these biases could be loaded in from a script on a per bot basis
//but for now we'll just give them some random values
const double LowRangeOfBias  = 0.5;
const double HighRangeOfBias = 1.5;

double HealthBias         = RandInRange(LowRangeOfBias, HighRangeOfBias);
double ShotgunBias        = RandInRange(LowRangeOfBias, HighRangeOfBias);
double RocketLauncherBias = RandInRange(LowRangeOfBias, HighRangeOfBias);
double RailgunBias        = RandInRange(LowRangeOfBias, HighRangeOfBias);
double ExploreBias        = RandInRange(LowRangeOfBias, HighRangeOfBias);
double AttackBias         = RandInRange(LowRangeOfBias, HighRangeOfBias);

//create the evaluator objects
m_Evaluators.push_back(new GetHealthGoal_Evaluator(HealthBias));
m_Evaluators.push_back(new ExploreGoal_Evaluator(ExploreBias));
m_Evaluators.push_back(new AttackTargetGoal_Evaluator(AttackBias));
m_Evaluators.push_back(new GetWeaponGoal_Evaluator(ShotgunBias, type_shotgun));
m_Evaluators.push_back(new GetWeaponGoal_Evaluator(RailgunBias, type_rail_gun));
m_Evaluators.push_back(new GetWeaponGoal_Evaluator(RocketLauncherBias, type_rocket_launcher));
TIP Goal arbitration is essentially an algorithmic process defined by a handful of
numbers. As a result, it is not driven by logic (like an FSM) but by data. This is
hugely advantageous because all you have to do to change the behavior is
tweak the numbers, which you may prefer to keep in a script file so that other
members of your team can easily experiment with them.
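To see how little logic is involved, here is a sketch of the arbitration step itself: iterate over the evaluators, keep whichever reports the highest desirability, and let the winner install its goal. (This is a simplified rendering of Goal_Think::Arbitrate; treat the body as an approximation rather than the exact source.)

void Goal_Think::Arbitrate()
{
  double          best           = 0.0;
  Goal_Evaluator* pMostDesirable = 0;

  //iterate through all the evaluators to find the highest scoring one
  std::vector<Goal_Evaluator*>::iterator curDes = m_Evaluators.begin();
  for (; curDes != m_Evaluators.end(); ++curDes)
  {
    double Desirability = (*curDes)->CalculateDesirability(m_pOwner);

    if (Desirability >= best)
    {
      best           = Desirability;
      pMostDesirable = *curDes;
    }
  }

  //the winning evaluator adds the appropriate goal to the bot's brain
  if (pMostDesirable) pMostDesirable->SetGoal(m_pOwner);
}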
State Memory
The stack-like (LIFO) nature of composite goals automatically endows
agents with a memory, enabling them to temporarily change behavior by
pushing a new goal (or goals) onto the front of the current goal’s subgoal
list. As soon as the new goal is satisfied it will be popped from the list and
the agent will resume whatever it was doing previously. This is a very
powerful feature that can be exploited in many different ways.
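The mechanics behind this memory are simple enough to sketch. The following is a bare-bones approximation of the idea (the book’s actual Goal and Goal_Composite classes carry much more machinery): subgoals are pushed onto the front of a list, only the frontmost subgoal is processed each update, and finished subgoals are popped so that whatever lies beneath resumes automatically.

#include <list>

class Goal
{
public:
  enum Status {active, completed, failed};

  virtual ~Goal(){}
  virtual Status Process() = 0;
};

class Goal_Composite : public Goal
{
protected:
  //the front of the list holds the goal currently being pursued
  std::list<Goal*> m_SubGoals;

public:
  ~Goal_Composite()
  {
    for (std::list<Goal*>::iterator it = m_SubGoals.begin();
         it != m_SubGoals.end(); ++it)
    {
      delete *it;
    }
  }

  //interruptions go on the FRONT, suspending whatever was there before
  void AddSubgoal(Goal* pGoal){ m_SubGoals.push_front(pGoal); }

  //process the frontmost subgoal, popping any that have finished
  Status ProcessSubgoals()
  {
    while (!m_SubGoals.empty())
    {
      Status s = m_SubGoals.front()->Process();

      if (s == active) return active;

      //the subgoal has finished; remove it so the goal beneath it
      //(the interrupted activity) resumes on the next update
      delete m_SubGoals.front();
      m_SubGoals.pop_front();

      if (s == failed) return failed; //let the parent replan
    }

    return completed;
  }
};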
Here are a couple of examples.
Example One — Automatic Resuming of Interrupted Activities
Imagine that Eric, who is on his way to the smithy, gold in pocket, is set
upon by a thief with a Rambo knife. This occurs just before he reaches the
third waypoint of the path he is following. His brain’s subgoal list at this
point resembles Figure 9.10.
Eric didn’t expect this to happen, but fortunately the AI programmer has
created a goal for dealing with just this sort of thing called DefendAgainst-
Attacker. This goal is pushed onto the front of his subgoal list and remains
active until the thief either runs away or is killed by Eric. See Figure 9.11.
Figure 9.10
Figure 9.11
The great thing about this design is that when DefendAgainstAttacker is
satisfied and removed from the list, Eric automatically resumes following
the edge to waypoint three.
Some of you will probably be thinking “Ah, but what if, while chasing
after the thief, Eric loses sight of waypoint three?” Well, that’s the fantastic
thing about this design. Because the goals have built-in logic for detecting
failure and for replanning, if a goal fails, the failure propagates back up
through the hierarchy until a parent capable of replanning the goal is
found.
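In code, this replanning idiom amounts to little more than resetting a goal’s status when one of its subgoals fails, so that the next update replans instead of propagating the failure any further. Here is a sketch building on the Goal_Composite approximation from the previous section (the real Raven goals achieve the same effect with helper methods such as ReactivateIfFailed; the class below is illustrative):

//an illustrative path-following goal that replans on failure
class Goal_FollowPath : public Goal_Composite
{
  bool m_bNeedsReplan;

public:
  Goal_FollowPath() : m_bNeedsReplan(true){}

  Status Process()
  {
    //(re)build the subgoal list of edges to traverse if required
    if (m_bNeedsReplan)
    {
      PlanPath();
      m_bNeedsReplan = false;
    }

    Status s = ProcessSubgoals();

    //a failed edge traversal (Eric losing sight of waypoint three,
    //say) does not fail the whole plan; flag a replan instead
    if (s == failed)
    {
      m_bNeedsReplan = true;
      return active;
    }

    return s;
  }

private:
  void PlanPath()
  {
    //query the path planner and push one traverse-edge subgoal per
    //path edge (omitted in this sketch)
  }
};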
Example Two — Negotiating Special Path Obstacles
Many game designs necessitate that agents are capable of negotiating one
or more types of path obstacles, such as doors, elevators, drawbridges, and
moving platforms. Often this requires the agent to follow a short sequence
of actions. For example, to use an elevator an agent must find the button
that calls it, walk toward the button, press it, and then walk back and stand
in front of the doors until the elevator arrives. Using a moving platform is a
similar process: The agent must walk toward the mechanism that operates
the platform, press/pull it, walk to the embarking point, wait for the plat-
form to arrive, and finally, step onto the platform and wait until it gets to
wherever it’s going. See Figure 9.12.
These “obstacles” should be transparent to the path planner since they are
not barriers to an agent’s movement. It takes time to negotiate them of
course, but this can be reflected in the navgraph edge costs.
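Such a sequence maps naturally onto a composite goal. Below is a hypothetical sketch reusing the Goal and Goal_Composite approximations from earlier (Goal_UsePlatform and the atomic stand-in are illustrative names, not Raven classes). Note that because subgoals are pushed onto the front of the list, the sequence must be added in reverse order.

#include <iostream>

//a trivial atomic goal used as a stand-in for the real behaviors
class Goal_Atomic : public Goal
{
  const char* m_Description;

public:
  Goal_Atomic(const char* desc) : m_Description(desc){}

  Status Process()
  {
    std::cout << m_Description << "\n"; //the real goal would act here
    return completed;
  }
};

class Goal_UsePlatform : public Goal_Composite
{
public:
  Goal_UsePlatform()
  {
    //subgoals go on the FRONT of the list, so queue the sequence in
    //reverse: the first action to perform is the last one added
    AddSubgoal(new Goal_Atomic("C) step on and ride the platform across"));
    AddSubgoal(new Goal_Atomic("B) walk back and wait for the platform"));
    AddSubgoal(new Goal_Atomic("A) walk to the button and press it"));
  }

  Status Process(){ return ProcessSubgoals(); }
};

Processing a Goal_UsePlatform then performs A, B, and C in order, even though C was pushed first, because each push_front places the newer subgoal ahead of the previous one.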
Figure 9.12. An agent uses a moving platform to cross a pit of fire. A) The agent walks
to the button and presses it. B) The agent walks back and waits for the platform to
arrive. C) The agent steps on the platform and remains stationary as it travels across
the fiery pit. D) The agent continues on its way.