Game artificial intelligence refers to techniques used in computer and video games to produce the illusion of intelligence in the behavior of non-player characters (NPCs). The techniques used typically draw upon existing methods from the field of artificial intelligence (AI). However, the term game AI is often used to refer to a broad set of algorithms that also include techniques from control theory, robotics, computer graphics and computer science in general.
Since game AI is centered on the appearance of intelligence and good gameplay, its approach is very different from that of traditional AI; workarounds and cheats are acceptable and, in many cases, the computer's abilities must be toned down to give human players a sense of fairness. This is true, for example, in first-person shooter games, where NPCs' otherwise perfect aiming would be beyond human skill.
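One common way of toning down such an advantage is to inject deliberate error into the bot's aim. The sketch below illustrates that idea; the function name, parameters, and error model are illustrative assumptions rather than the approach of any particular engine.

```python
import math
import random

def degraded_aim(shooter_pos, target_pos, max_error_deg=4.0, skill=0.5):
    """Return an aim angle deliberately perturbed so the bot can miss like a human.

    skill in [0, 1]: 1.0 means near-perfect aim, 0.0 means very sloppy aim.
    (Illustrative sketch; real games also model reaction time, tracking lag, etc.)
    """
    dx = target_pos[0] - shooter_pos[0]
    dy = target_pos[1] - shooter_pos[1]
    perfect_angle = math.atan2(dy, dx)
    # Scale the random error down as skill goes up.
    error = math.radians(max_error_deg) * (1.0 - skill) * random.uniform(-1.0, 1.0)
    return perfect_angle + error
```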
Game playing was an area of research in AI from its inception. In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess.[1] These were among the first computer programs ever written. Arthur Samuel's checkers program, developed in the mid-1950s and early 1960s, eventually achieved sufficient skill to challenge a respectable amateur.[2] Work on checkers and chess would culminate in the defeat of Garry Kasparov by IBM's Deep Blue computer in 1997.[3] The first video games developed in the 1960s and early 1970s, like Spacewar!, Pong, and Gotcha (1973), were games implemented on discrete logic and strictly based on the competition of two players, without AI.
Games that featured a single-player mode with enemies started appearing in the 1970s. The first notable ones for the arcade appeared in 1974: the Taito game Speed Race (racing video game) and the Atari games Qwak (duck-hunting light gun shooter) and Pursuit (fighter aircraft dogfighting simulator). Two text-based computer games from 1972, Hunt the Wumpus and Star Trek, also had enemies. Enemy movement was based on stored patterns. The incorporation of microprocessors would allow more computation and random elements to be overlaid on movement patterns.
It was during the golden age of video arcade games that the idea of AI opponents was largely popularized, due to the success of Space Invaders (1978), which sported an increasing difficulty level, distinct movement patterns, and in-game events dependent on hash functions based on the player's input. Galaxian (1979) added more complex and varied enemy movements, including maneuvers by individual enemies who break out of formation. Pac-Man (1980) introduced AI patterns to maze games, with the added quirk of different personalities for each enemy. Karate Champ (1984) later introduced AI patterns to fighting games, although the poor AI prompted the release of a second version. The role-playing video game Dragon Quest IV (1990) introduced a "Tactics" system, where the user can adjust the AI routines of non-player characters during battle, a concept later introduced to the action role-playing game genre by Secret of Mana (1993).
Games like Madden Football, Earl Weaver Baseball and Tony La Russa Baseball all based their AI on an attempt to duplicate on the computer the coaching or managerial style of the selected celebrity. Madden, Weaver and La Russa all did extensive work with their respective game development teams to maximize the accuracy of the games. Later sports titles allowed users to "tune" variables in the AI to produce a player-defined managerial or coaching strategy.
The emergence of new game genres in the 1990s prompted the use of formal AI tools like finite state machines. Real-time strategy games taxed the AI with many objects, incomplete information, pathfinding problems, real-time decisions and economic planning, among other things.[4] The first games of the genre had notorious problems. Herzog Zwei (1989), for example, had almost broken pathfinding and very basic three-state state machines for unit control, and Dune II (1992) attacked the players' base in a beeline and used numerous cheats.[5] Later games in the genre exhibited more sophisticated AI.
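A unit controller built from a handful of states, of the kind these early games used, can be sketched as a small finite state machine. The states and transition conditions below are illustrative, not a reconstruction of any specific title's logic.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    MOVE = auto()
    ATTACK = auto()

class UnitFSM:
    """Minimal three-state finite state machine for a single RTS unit."""

    def __init__(self):
        self.state = State.IDLE

    def update(self, has_order, enemy_in_range, at_destination):
        """Advance the state once per game tick based on simple boolean inputs."""
        if self.state is State.IDLE:
            if enemy_in_range:
                self.state = State.ATTACK
            elif has_order:
                self.state = State.MOVE
        elif self.state is State.MOVE:
            if enemy_in_range:
                self.state = State.ATTACK
            elif at_destination:
                self.state = State.IDLE
        elif self.state is State.ATTACK:
            if not enemy_in_range:
                self.state = State.MOVE if has_order else State.IDLE
        return self.state
```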
Later games have used bottom-up AI methods, such as the emergent behaviour and evaluation of player actions in games like Creatures or Black & White. Façade, an interactive story released in 2005, used interactive multi-way dialogue and AI as the main aspect of the game.
Games have provided an environment for developing artificial intelligence with potential applications beyond gameplay. Examples include Watson, a Jeopardy-playing computer; and the RoboCup tournament, where robots are trained to compete in soccer.[6]
Many credit the sudden rise of online multiplayer games to the slow advancement of video game AI: it is often argued that player disappointment with computer opponents pushes players toward human ones, which in turn discourages developers from investing in better AI, perpetuating the cycle.
Purists complain that the "AI" in the term "game AI" overstates its worth, as game AI is not about intelligence, and shares few of the objectives of the academic field of AI. Whereas "real" AI addresses fields of machine learning, decision making based on arbitrary data input, and even the ultimate goal of strong AI that can reason, "game AI" often consists of a half-dozen rules of thumb, or heuristics, that are just enough to give a good gameplay experience.
Game developers' increasing awareness of academic AI and a growing interest in computer games by the academic community are causing the definition of what counts as AI in a game to become less idiosyncratic. Nevertheless, significant differences between different application domains of AI mean that game AI can still be viewed as a distinct subfield of AI. In particular, the ability to legitimately solve some AI problems in games by cheating creates an important distinction. For example, inferring the position of an unseen object from past observations can be a difficult problem when AI is applied to robotics, but in a computer game an NPC can simply look up the position in the game's scene graph. Such cheating can lead to unrealistic behavior and so is not always desirable. But its possibility serves to distinguish game AI and leads to new problems to solve, such as when and how to use cheating.
The major limitation of strong AI is the inherent depth of thinking and the extreme complexity of the decision-making process: although building a truly "smart" AI is theoretically possible, doing so would require considerable processing power.
Game AI/heuristic algorithms are used in a wide variety of quite disparate fields inside a game. The most obvious is in the control of any NPCs in the game, although scripting is currently the most common means of control. Pathfinding is another common use for AI, widely seen in real-time strategy games. Pathfinding is the method for determining how to get an NPC from one point on a map to another, taking into consideration the terrain, obstacles and possibly "fog of war". Beyond pathfinding, navigation is a sub-field of game AI focusing on giving NPCs the capability to navigate in their environment, finding a path to a target while avoiding collisions with other entities (other NPCs, players, and so on) or collaborating with them (group navigation). Game AI is also involved with dynamic game difficulty balancing, which consists of adjusting the difficulty in a video game in real time based on the player's ability.
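Pathfinding is typically implemented as a search over a graph of walkable positions, with A* being the textbook choice. The following is a compact A* sketch over a 2D grid; the grid representation, uniform step cost, and Manhattan heuristic are illustrative assumptions rather than any specific game's implementation.

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* over a 2D grid where grid[y][x] == 1 marks an obstacle.

    Returns a list of (x, y) cells from start to goal, or None if unreachable.
    """
    def h(a, b):                        # Manhattan-distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    tie = count()                       # tie-breaker so the heap never compares nodes
    frontier = [(h(start, goal), 0, next(tie), start, None)]
    came_from, best_cost = {}, {start: 0}
    while frontier:
        _, cost, _, node, parent = heapq.heappop(frontier)
        if node in came_from:           # already expanded with a better path
            continue
        came_from[node] = parent
        if node == goal:                # reconstruct the path by walking parents
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                new_cost = cost + 1
                if new_cost < best_cost.get(nxt, float("inf")):
                    best_cost[nxt] = new_cost
                    heapq.heappush(frontier, (new_cost + h(nxt, goal), new_cost,
                                              next(tie), nxt, node))
    return None
```

In practice games search navigation meshes or waypoint graphs rather than raw grids, but the same algorithm applies.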
The concept of emergent AI has recently been explored in games such as Creatures, Black & White and Nintendogs and toys such as Tamagotchi. The "pets" in these games are able to "learn" from actions taken by the player and their behavior is modified accordingly. While these choices are drawn from a limited pool, they often give the desired illusion of an intelligence on the other side of the screen.
Many contemporary video games fall under the categories of action, first-person shooter, or adventure. In most of these games there is some level of combat, so the AI's ability to be effective in combat is important. A common goal today is to make the AI more human, or at least appear so.
One of the more effective features found in modern video game AI is the ability to hunt. AI originally reacted in a very black-and-white manner: if the player was in a specific area, the AI would react in either a completely offensive manner or be entirely defensive. In recent years, the idea of "hunting" has been introduced; in this hunting state the AI looks for realistic markers, such as sounds made by the character or footprints they may have left behind.[7] These developments ultimately allow for a more complex form of play, in which the player can actually consider how to approach or avoid an enemy. This feature is particularly prevalent in the stealth genre.
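A simplified sketch of such hunting behaviour might look like the following, where a guard investigates the freshest clue the player has left behind instead of reading the player's true position. The class names, fields, and thresholds are illustrative, not drawn from any particular game.

```python
from dataclasses import dataclass

@dataclass
class Clue:
    pos: tuple    # world position of a footprint or a sound
    age: float    # seconds since the clue was created

@dataclass
class Guard:
    pos: tuple = (0.0, 0.0)
    state: str = "patrol"

    def update(self, clues, max_age=10.0):
        """Enter a 'hunting' state while a fresh clue exists; otherwise patrol."""
        fresh = [c for c in clues if c.age < max_age]      # ignore stale clues
        if not fresh:
            self.state = "patrol"
            return
        self.state = "hunting"
        target = min(fresh, key=lambda c: c.age)           # chase the newest clue
        dx = target.pos[0] - self.pos[0]
        dy = target.pos[1] - self.pos[1]
        self.pos = (self.pos[0] + 0.1 * dx, self.pos[1] + 0.1 * dy)  # step toward it
```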
Another development in recent game AI has been the development of "survival instinct". In-game computers can recognize different objects in an environment and determine whether they are beneficial or detrimental to the AI's survival. Like a human player, the AI can "look" for cover in a firefight before taking actions that would leave it otherwise vulnerable, such as reloading a weapon or throwing a grenade. There can be set markers that tell it when to react in a certain way. For example, if the AI is given a command to check its health throughout a game, then further commands can be set so that it reacts a specific way at a certain percentage of health. If the health is below a certain threshold, the AI can be set to run away from the player and avoid it until another function is triggered. Another example could be that if the AI notices it is out of bullets, it will find a cover object and hide behind it until it has reloaded. Actions like these make the AI seem more human. However, there is still a need for improvement in this area. Unlike a human player, the AI must be programmed for all the possible scenarios, which severely limits its ability to surprise the player.[7]
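The threshold-driven rules described above can be sketched as a simple decision function. The thresholds and action names here are illustrative stand-ins for whatever a given game defines.

```python
def choose_action(health, ammo, in_cover, flee_threshold=0.25):
    """Pick a survival-oriented action from simple hand-written rules.

    health is a fraction in [0, 1]; thresholds and action names are illustrative.
    """
    if health < flee_threshold:
        return "flee"                   # too hurt: break off and retreat
    if ammo == 0:
        # Reload only when it is safe to do so.
        return "reload" if in_cover else "take_cover_then_reload"
    if not in_cover:
        return "move_to_cover"
    return "attack"
```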
Another side-effect of combat AI occurs when two AI-controlled characters encounter each other; first popularized in the id Software game DOOM, so-called 'monster infighting' can break out in certain situations. Specifically, AI agents that are programmed to respond to hostile attacks will sometimes attack each other if their cohort's attacks land too close to them.
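A minimal sketch of the kind of retaliation rule that produces such infighting is shown below; the optional same-species check, which mirrors how some games limit infighting, is an illustrative assumption.

```python
def on_hit(victim, attacker, same_species_forgiveness=True):
    """Retaliation rule: the last thing to hurt a monster becomes its target,
    even if that thing is an allied monster (hence 'monster infighting')."""
    if attacker is victim:
        return
    if same_species_forgiveness and \
            getattr(attacker, "species", None) == getattr(victim, "species", None):
        return                          # optionally never let identical monsters fight
    victim.target = attacker            # switch aggression to the offending cohort
```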
In the context of artificial intelligence in video games, cheating refers to the programmer giving agents access to information that would be unavailable to the player in the same situation.[8] In a simple example, if the agents want to know if the player is nearby they can either be given complex, human-like sensors (seeing, hearing, etc.), or they can cheat by simply asking the game engine for the player's position. The use of cheating in AI shows the limitations of the "intelligence" achievable artificially; generally speaking, in games where strategic creativity is important, humans could easily beat the AI after a minimum of trial and error if it were not for this advantage. Cheating is often implemented for performance reasons where in many cases it may be considered acceptable as long as the effect is not obvious to the player. While cheating refers only to privileges given specifically to the AI—it does not include the inhuman swiftness and precision natural to a computer—a player might call the computer's inherent advantages "cheating" if they result in the agent acting unlike a human player.[8]
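The contrast can be sketched as two versions of the same "is the player nearby?" query: one that reads the player's position straight from the game state, and one that simulates limited senses. The radius, field of view, and line-of-sight callback below are illustrative assumptions.

```python
import math

def player_nearby_cheating(player_pos, npc_pos, radius=15.0):
    """'Cheat': read the player's exact position straight from the game state."""
    return math.dist(player_pos, npc_pos) <= radius

def player_nearby_fair(player_pos, npc_pos, npc_facing, line_of_sight,
                       radius=15.0, fov_deg=90.0):
    """Simulated senses: the NPC only 'knows' about the player if the player is
    close, inside the NPC's field of view, and not hidden behind geometry.

    line_of_sight is a callable supplied by the game (illustrative) that returns
    True when nothing blocks the segment between the two positions.
    """
    if math.dist(player_pos, npc_pos) > radius:
        return False
    to_player = math.atan2(player_pos[1] - npc_pos[1], player_pos[0] - npc_pos[0])
    # Smallest absolute angular difference between facing and direction to player.
    angle_off = abs((to_player - npc_facing + math.pi) % (2 * math.pi) - math.pi)
    if angle_off > math.radians(fov_deg / 2):
        return False
    return line_of_sight(npc_pos, player_pos)
```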
Cheating AI is a well-known aspect of Sid Meier's Civilization series; at high difficulty settings, the player must build their empire from scratch, while the computer's empire receives additional units at no cost and is freed from most resource restrictions. Allowing the AI to "cheat" in this way, to compensate for shortcomings in its decision-making, is common in many strategy games.