The manager is the third and final part of the game’s AI. He is responsible for the high-level strategic decisions.
The core of the manager is very simple. He searches the surroundings for potential objectives, divides the benefit of each objective by the cost, then instructs the Navigator to head for the best objective.
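The core loop could look something like this minimal sketch, where `pick_objective` and its benefit/cost helpers are hypothetical names and the numbers are made up:

```python
# Minimal sketch of the Manager's core decision (hypothetical helpers and values).
def pick_objective(objectives, benefit, cost):
    """Return the objective with the best benefit-to-cost ratio."""
    return max(objectives, key=lambda obj: benefit(obj) / cost(obj))

# Example: three objectives with made-up benefits and costs.
benefits = {'banana': 5, 'coin': 1, 'checkout': 2}
costs = {'banana': 10.0, 'coin': 1.0, 'checkout': 8.0}

best = pick_objective(benefits, benefits.get, costs.get)
# the coin wins: ratio 1.0 beats 0.5 (banana) and 0.25 (checkout)
```

The chosen objective would then be handed to the Navigator.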
Probing the cart’s environment is currently very simple and naive. A linear search is done through all objects, and those that implement a certain interface (called InterestingToAI) are considered, provided that they are within a certain distance of the cart. That distance roughly corresponds to the field of view a human player has on the screen. The linear search is obviously inefficient, but since it only runs about once per second, it’s no big deal. If I ever start using a space partitioning scheme (e.g. a bintree, quadtree or kd-tree), this is one of the areas that will benefit.
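In Python-flavoured pseudocode the probe might look like this; only the name InterestingToAI comes from the actual game, the rest (the object classes, the marker-class mechanism) is guesswork:

```python
import math

class InterestingToAI:
    """Marker base class for objects the AI may consider (name from the post)."""
    pass

class Powerup(InterestingToAI):
    def __init__(self, pos):
        self.pos = pos

class Wall:  # not interesting to the AI
    def __init__(self, pos):
        self.pos = pos

def probe(objects, cart_pos, view_radius):
    """Linear scan: keep interesting objects within the cart's view radius."""
    return [o for o in objects
            if isinstance(o, InterestingToAI)
            and math.dist(o.pos, cart_pos) <= view_radius]
```

A spatial index would replace the list comprehension with a range query, but as noted, at once per second the linear scan is cheap enough.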
The next phase is to determine the benefit of a given objective. These benefits are coded per object type: “the benefit of an $n item is n, the benefit of the Super Banana Bomb is 5 …”. The values can vary with the AI’s personality, making some characters more aggressive in collecting powerups and others more interested in shopping items. There is one overriding exception: if the fuel is low, head for the nearest checkout lane. Again, the definition of “low fuel” can vary per character.
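A sketch of how personality-scaled benefits and the low-fuel override might fit together; all the object types, weights and thresholds here are invented for illustration:

```python
# Hypothetical base benefits per object type (made-up numbers).
BASE_BENEFIT = {'powerup': 5, 'shopping_item': 3}

def pick_target(objectives, personality, fuel):
    """Scale base benefits by personality; low fuel overrides everything."""
    if fuel < personality['low_fuel']:        # "low fuel" varies per character
        return 'nearest_checkout'
    return max(objectives,
               key=lambda t: BASE_BENEFIT[t] * personality.get(t, 1.0))

# Two example personalities: one powerup-hungry, one focused on shopping.
aggressive = {'powerup': 2.0, 'shopping_item': 0.5, 'low_fuel': 10}
shopper    = {'powerup': 0.5, 'shopping_item': 2.0, 'low_fuel': 25}
```

With plenty of fuel, the aggressive cart goes for the powerup and the shopper for the shopping item; once the shopper drops below its (higher) fuel threshold, it heads for the checkout regardless.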
Determining the cost of an objective is trickier. I started with the simplest possible approach: cost equals distance (as the crow flies). The problem was that velocity was not taken into account: if an object was close but behind us, its cost would (incorrectly) be lower than that of an object slightly farther away but ahead of us. To remedy this, I devised a multiplier function that depends on the velocity and the direction the cart is moving in. In open space, this all worked out beautifully.
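The post doesn’t give the multiplier formula, but the idea can be sketched like this: penalize objectives behind the cart, more so the faster it is moving. The exact shape and constants below are my own guesses:

```python
import math

def cost(cart_pos, cart_vel, obj_pos):
    """Crow-flies distance, scaled up for objectives behind a moving cart.

    The multiplier shape is hypothetical; the post only says it depends on
    the cart's velocity and heading.
    """
    dx, dy = obj_pos[0] - cart_pos[0], obj_pos[1] - cart_pos[1]
    dist = math.hypot(dx, dy)
    speed = math.hypot(*cart_vel)
    if dist == 0 or speed == 0:
        return dist
    # Cosine of the angle between heading and objective: 1 ahead, -1 behind.
    cos_a = (cart_vel[0] * dx + cart_vel[1] * dy) / (speed * dist)
    # Penalty grows with speed for objectives behind us (invented constants).
    multiplier = 1.0 + (1.0 - cos_a) * 0.5 * min(speed / 10.0, 1.0)
    return dist * multiplier
```

For a cart moving right at speed 5, an objective 10 units ahead costs 10, while one 10 units behind costs 15, so the cart prefers targets it is already heading towards.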
When there were obstacles, however, the scheme broke down worse than I expected, often producing very silly decisions. I tried probing the environment: if there was an obstacle in the way, the cost was doubled. It wasn’t enough, and in the end I decided that the Manager would need a proper pathfinding algorithm as well.
I had already implemented the A* pathfinding algorithm for the Navigator. However, A* only finds the path to one particular objective; for the Manager, I need to determine the path (or at least the distance along it) to multiple objectives. This called for the classic Dijkstra algorithm, which (being a special case of A*) was fairly easy to implement. Since Dijkstra’s algorithm already computes the paths to all potential objectives, I could do away with A* completely and let the Navigator run Dijkstra instead. Each frame, one step of the algorithm is executed. When it finishes, the cart has probably moved closer to another starting waypoint, (slightly) invalidating the results, so we simply start over. The Manager can, at any time, ask the Navigator for the latest computed distance to any given waypoint.
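One way to spread Dijkstra over frames is to write it as a generator that settles one node per step; the graph representation and names here are my own, not the game’s:

```python
import heapq

def dijkstra_steps(graph, start):
    """Incremental Dijkstra: one settled node per iteration.

    graph: {node: [(neighbour, edge_cost), ...]}. Yields the current
    distance map after each settled node, so the caller can advance it
    once per frame and restart from a new waypoint when the cart moves on.
    """
    dist = {start: 0.0}
    pq = [(0.0, start)]
    done = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in done:
            continue
        done.add(node)
        for nbr, w in graph[node]:
            nd = d + w
            if nd < dist.get(nbr, float('inf')):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
        yield dist  # the Manager can query the latest distances here
```

Driving the generator to completion on a tiny three-waypoint graph gives the expected shortest distances; in the game you would instead call `next()` on it once per frame.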
I noticed that the framerate had taken quite a hit because of all this. It turned out that 7% of the entire frame time was spent determining which waypoint was closest to a given point. Again, a linear search … I killed this one by precomputing a 64 by 64 rectangular grid covering the level, with each grid point pointing to the nearest waypoint. Finding the closest waypoint is now simply a matter of rounding the current position to the nearest grid point and doing one lookup in the grid. It was simple to implement, and the lookup all but disappeared from the profiling output.
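The precomputation and lookup could be sketched as follows (function names and the brute-force build step are assumptions; only the grid idea and the 64 by 64 size come from the post):

```python
import math

def build_grid(waypoints, level_w, level_h, n=64):
    """Precompute, for each of n*n cells, the index of the nearest waypoint."""
    grid = []
    for gy in range(n):
        row = []
        for gx in range(n):
            # Use the cell centre as the representative query point.
            px = (gx + 0.5) * level_w / n
            py = (gy + 0.5) * level_h / n
            row.append(min(range(len(waypoints)),
                           key=lambda i: math.hypot(waypoints[i][0] - px,
                                                    waypoints[i][1] - py)))
        grid.append(row)
    return grid

def nearest_waypoint(grid, pos, level_w, level_h):
    """O(1) lookup: map the position to a grid cell, read the answer."""
    n = len(grid)
    gx = min(n - 1, max(0, int(pos[0] * n / level_w)))
    gy = min(n - 1, max(0, int(pos[1] * n / level_h)))
    return grid[gy][gx]
```

The build is O(cells × waypoints), but it runs once at level load; every per-frame query after that is a single array read.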
Here’s a video of the resulting behaviour. It is clear that the Driver still needs some work, but the pathfinding works fine. A bit too well, in fact: all four AI carts initially head for the same objective, causing a big collision. Making them take each other into account would help, but maybe I can get away with some simple randomization as well.
Two other things are new in this video. You can’t have failed to notice the stacks of cans, and the spectacular (unrealistic, but who cares?) explosion when someone runs into them. The neat thing about these is that the entire simulation is done in 2D. The stack is initially dormant, consisting of some overlapping cans (2D discs). When the stack is hit, the cans are kicked into life and a force is applied to them (depending on their position within the stack, the impact velocity and direction, and some randomness). The z coordinate (height) is simulated independently! As a result, cans can’t fly over each other or over other objects. Did you even notice? The rotation outside the horizontal xy plane is also fake: the angular velocity about the two other axes depends directly on the speed at which the can is moving. To get the cans to end up in natural positions (upright, upside-down, or on their side), another hack is applied: if the speed drops below a certain threshold, we simply round the angle to the nearest multiple of 90 degrees. The trick is visible if you look closely, but I can live with it for now.
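The angle-rounding hack is a one-liner; the threshold value below is invented, and the real game presumably works in radians per axis:

```python
def settle_angle(angle, speed, threshold=0.5):
    """Snap a can's tilt to the nearest multiple of 90 degrees once it is
    nearly at rest, so it ends up upright, upside-down, or on its side.
    The threshold is a made-up value for illustration."""
    if speed < threshold:
        return round(angle / 90.0) * 90.0
    return angle
```

A can tilted at 93 degrees snaps to 90 once it slows down, but keeps its exact angle while still moving, which is why the snap is only visible if you watch closely.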
The other thing that is new is the warning when you are almost out of fuel. Recall that, if you don’t check out before running out of fuel, you get nothing out of that race. To help you out if you don’t know the level well, some arrows appear that point to the checkout lanes. I might later add similar arrows to indicate your opponents, or maybe even the remaining shopping items. Things are already looking cluttered, though, so I might have to cut down on the amount of debris to emphasize more important things.
Before and after a game ‘session’, there is practically nothing. The game starts right after you have selected the level. When every player has finished, you’re abruptly dropped back into the level selection screen. These aspects are high on the list of priorities and will soon receive some attention.