I recently saw a link to this blog entry by Christer Ericson (of SCEA), which is a pretty amusing rip on an academic who has tut-tutted the game development community about poor AI. (Definitely check out the video linked in the blog entry to get the proper context for this entry.)
I frequently see gamers calling for more time, attention, and resources to be spent on “AI.” Unfortunately, I think the term has been diluted, such that it is now an umbrella term for a wide variety of techniques and problems, ranging from pathfinding to scripted sequences all the way to learning AIs. This may be a shocking statement to some, but I feel that what academia views as AI (tilted more towards the latter end of the scale) is not appropriate for many types of games. Actually, let me rephrase that: AI is not a good value proposition for many game development projects — the “bang for the buck” ratio is poor.
Why is this? Here are a few factors that come into play:
- It’s hard to schedule “interesting AI,” partly because it often depends on other systems being complete or usable before it can function at all.
- It is difficult to quantify the benefit of “better” AI. In fact, sometimes it’s hard even to decide which direction “better” lies in!
- The more complicated an AI is, the harder it tends to be to test and verify. (Not always true, but this is frequently the case.)
I’m going to use the crude term “proper AI” to refer to AI more complicated than state machines and rand(), usually incorporating more academic and simulation-oriented techniques, to differentiate it from the typical, highly pragmatic game AI approaches. I feel that “proper” AI is most important for games that:
- skew towards the simulation end of the spectrum — games that require realistic behavior in a wide variety of circumstances that cannot easily be pre-programmed.
- have plausible ways of demonstrating different AI behaviors in meaningful ways for a player. If the player can’t perceive AI, they can’t appreciate it.
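To make the crude end of that spectrum concrete, here is a minimal sketch (hypothetical names, Python for brevity) of what I mean by state machines plus rand(): a guard that flips between a handful of states, with a random roll standing in for judgment.

```python
import random

# A minimal sketch of the pragmatic end of the scale: a three-state
# machine plus a random roll. All names here are hypothetical.
class GuardAI:
    def __init__(self, rng=None):
        self.state = "patrol"
        self.rng = rng or random.Random()

    def update(self, player_visible):
        if self.state == "patrol":
            if player_visible:
                self.state = "attack"
        elif self.state == "attack":
            if not player_visible:
                # rand()-style coin flip: keep hunting or give up
                self.state = "search" if self.rng.random() < 0.7 else "patrol"
        elif self.state == "search":
            if player_visible:
                self.state = "attack"
            elif self.rng.random() < 0.2:
                self.state = "patrol"
        return self.state
```

Crude as it is, when paired with animation and audio cues this is often all the “intelligence” a short-lived enemy needs.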
In a corridor-based shooter, for example, developing “proper” AI (capable of operating in any area of the game, with any weapon, etc.) won’t significantly enhance the gameplay experience, so it should be shelved in favor of scripted encounters. The genre’s conventions usually dictate enemies who will be alive for no longer than 30 seconds (of fun), maximum. I don’t care what kind of “proper” AI you put into a game actor — if its lifespan is under 30 seconds, it isn’t going to show off any whiz-bang behavior the player can appreciate before it gets mowed down. It’s like meeting someone in real life: in 30 seconds you can only form a superficial impression, built from appearance and maybe a few exchanged words. Fortunately, the appearance of doing something intelligent (through scripting), combined with audio and gesture cues, projects the illusion of an intelligent adversary just as effectively as “proper” AI. Eliza could probably hold up to 30 seconds of scrutiny — for many games, that’s perfectly adequate.
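The scripted alternative can be as simple as a designer-authored list of timed, canned actions. A minimal sketch of the idea (all names hypothetical):

```python
# Hypothetical sketch: an encounter is just a list of (time, action)
# pairs authored by a designer, replayed as the clock passes each timestamp.
ENCOUNTER_SCRIPT = [
    (0.5, "take_cover"),
    (1.5, "shout_taunt"),
    (3.0, "blind_fire"),
    (5.0, "flank_left"),
]

def actions_due(script, prev_t, now_t):
    """Return scripted actions whose timestamps fall in (prev_t, now_t]."""
    return [action for t, action in script if prev_t < t <= now_t]
```

Each frame, the game calls `actions_due` with the previous and current encounter time and fires whatever comes back — no deliberation, just stagecraft.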
For other genres, though, “proper” AI can be quite important. For example, in a flight simulator game that includes combat, satisfying gameplay is very dependent on quality AI. The perception that an enemy (or ally) is cheating in a game of this type can really ruin the player experience, and yet the AI must be competent enough to provide a sufficient challenge to a player. Simulating pilot (and electronic) perception, squad tactics and communication, and air combat doctrine may be critical goals (particularly for a military simulation).
Another genre that pretty obviously demands better AI is the Sims-type game. The player spends a lot of time watching and interacting with agents that are supposed to be simulated human beings, so those agents had better act in realistic and interesting ways. No surprises here.
The roundabout point I’m getting at is that AI in games is best thought of as serving the needs of a specific gameplay idea, not as something inherently fun by itself. Many attempts at creating more “realistic” or “better” AI for games forget that it doesn’t really make the game much better — AI researchers get too attached to the idea that “it’s really thinking!” and lose sight of the real goal of game development, which is to make fun games. Likewise, forum dwellers who cajole developers to put in “better AI” aren’t seeing the whole picture either. Game developers will adopt better AI approaches when those approaches can be demonstrated to better meet the needs of gameplay and are sufficiently well understood to lower production risks — it’s that simple.