Military simulation
From Wikipedia, the free encyclopedia
Military simulations, also known informally as wargames, are simulations in which theories of warfare can be tested and refined without the need for actual hostilities. Many professional analysts object to the term wargames, as it is generally taken to refer to the civilian hobby; hence the preference for the term simulation.
Simulations exist in many different forms, with varying degrees of realism. In recent times, the scope of simulations has widened to include not only military but also political and social factors, which are seen as inextricably entwined in a realistic warfare model.
Whilst many governments make use of simulation, both individually and collaboratively, little is known about it outside professional circles. Yet modelling is often the means by which governments test and refine their military and political policies. Military simulations are seen as a useful way to develop tactical, strategic and doctrinal solutions, but critics argue that the conclusions drawn from such models are inherently flawed, due to the approximate nature of the models used.
[edit] The simulation spectrum
The term military simulation can cover a wide spectrum of activities, ranging from full-scale field exercises, such as those held annually by the British Army's BATUS training unit at Medicine Hat in Canada[2], to abstract computerised models that can proceed with little or no human involvement, such as the RAND Strategy Assessment Center (RSAC)[3].
As a general scientific principle, the most reliable data is produced by actual observation and the most reliable theories are based on it.[4] This is also true in military analysis, where analysts look towards live field exercises and trials as providing data that is likely to be realistic (depending on the realism of the exercise) and verifiable (it has been gathered by actual observation). It can be readily discovered, for example, how long it takes to construct a pontoon bridge under given conditions with given manpower, and this data can then be used to provide norms for expected performance under similar conditions in the future, or to refine the bridge-building process. Of course, any form of training can be regarded as a 'simulation' in the strictest sense of the word (in as much as it simulates an operational environment); however, many if not most exercises are carried out not to test new ideas or models, but to provide the participants with the skills to operate within existing ones.
Full-scale military exercises, or even smaller-scale ones, are not always feasible or even desirable. Cost is possibly the biggest factor involved: it is an expensive business to release men and materiel from any standing commitments, transport them to a suitable location, and then cover additional expenses such as petroleum, oil and lubricants (POL) usage, equipment maintenance, supplies and consumables replenishment, and other items[5]. In addition, certain warfare models are not amenable to verification by this method: for example, it would be impossible to accurately test an attrition scenario by killing off one's own troops.
Moving away from the Field Exercise, it is often more convenient to test a theory by reducing the level of personnel involvement. Map exercises can be conducted involving senior officers and planners, but without the need to physically move around any troops. These retain some human input, and thus can still reflect to some extent the human imponderables that make warfare so challenging to model, with the advantage of reduced costs and increased accessibility. A map exercise can also be conducted with far less forward planning than a full scale deployment, making it an attractive option for more minor simulations that would not merit anything larger.
Increasing the level of abstraction still further, military simulation moves towards an environment readily recognised by civilian wargamers. This type of simulation can be manual, implying no (or very little) computer involvement, computer-assisted, or fully computerised.
Manual simulations have probably been in use in some form since mankind first went to war. The game of chess can be regarded as a form of military simulation (although its precise origins are debated[6]). In more recent times, the forerunner of modern simulations was the Prussian game Kriegspiel, which appeared around 1811 and is sometimes credited with the Prussian victory in the Franco-Prussian War of 1870–71[7]. It was distributed to each Prussian regiment, and they were ordered to play it regularly, prompting a visiting German officer to declare in 1824, "It's not a game at all! It's training for war!"[8]. Eventually so many rules sprang up, as each regiment improvised its own variations, that two versions of the game came into use. One, known as rigid Kriegspiel, was played by strict adherence to the lengthy rule book. The other, free Kriegspiel, was governed by the decisions of human umpires[9]. Each version had its advantages and disadvantages: rigid Kriegspiel contained rules covering most situations, and the rules were derived from historical battles where those same situations had occurred, making the simulation verifiable and rooted in observable data. However, its prescriptive nature acted against any impulse of the participants towards free and creative thinking. Conversely, free Kriegspiel could encourage this type of thinking, as its rules were open to interpretation by the umpires and could be adapted during operation. This very interpretation, though, tended to negate the verifiable nature of the simulation, as different umpires might well adjudge the same situation in different ways, especially where there was a lack of historical precedent.
The above arguments are still cogent in the modern, computer-heavy military simulation environment. There remains a recognised place for umpires as arbiters of a simulation, hence the persistence of manual simulations in war colleges throughout the world. Both computer-assisted and entirely computerised simulations are common as well, with each being used as required by circumstances. The RAND Corporation is one of the best-known designers of military simulations for the US government and Air Force, and one of the pioneers of the political-military simulation[10]. Its SAFE (Strategic And Force Evaluation) simulation is an example of a manual simulation, with one or more teams of up to ten participants being sequestered in separate rooms and their moves being overseen by an independent director and his staff. Such simulations may be conducted over a few days (thus requiring commitment from the participants): an initial scenario (for example, a conflict breaking out in the Persian Gulf) is presented to the players with appropriate historical, political and military background information. They then have a set amount of time to discuss and formulate a strategy, with input from the directors/umpires (often called Control) as required. Where more than one team is participating, the teams may be divided on partisan lines: traditionally blue and red are used as designations, with blue representing the 'home' nation and red the opposition. In this case the teams will work against each other, their moves and counter-moves being relayed to their opponents by Control, who will also adjudicate on the results of such moves. At set intervals Control will declare a change in the scenario, usually of a period of days or weeks, and present the evolving situation to the teams based on their reading of how it might develop as a result of the moves made.
For example, blue team might decide to respond to the Gulf conflict by moving a carrier battle group into the area whilst simultaneously using diplomatic channels to avert hostilities. Red team on the other hand might decide to offer military aid to one side or another, perhaps seeing an opportunity to gain influence in the region and counter blue's initiatives. At this point Control could declare that a week has now passed, and present an updated scenario to the players: possibly the situation has deteriorated further and blue must now decide if they wish to pursue the military option, or alternatively tensions might have eased and the onus now lies on red as to whether or not to escalate by providing more direct aid to their clients[11].
Computer-assisted simulations are really just a development of the manual simulation, and again there are different variants on the theme. Sometimes the computer assistance will be nothing more than a database to help umpires keep track of information during a manual simulation. At other times one or other of the teams might be replaced by a computer-simulated opponent (known as an agent or automaton)[12]. This can reduce the umpires' role to that of interpreter of the data produced by the agent, or even obviate the need for an umpire altogether. Most commercial wargames designed to run on computers (such as Blitzkrieg, the Total War series and even the Civilization games) fall into this category.
Where both human teams are replaced by agents, the simulation can be fully computerised and, with minimal supervision, left to run by itself. The main advantage of this is the ready accessibility of the simulation — beyond the time required to program and update the computer models, no special requirements are necessary. A fully computerised simulation can be run at virtually any time and in almost any location, the only equipment needed being a laptop computer. There is no need to juggle schedules to suit busy participants, acquire suitable facilities and arrange for their use, or obtain security clearances. An additional important advantage is that a computerised model can perform many hundreds or even thousands of iterations in the time that it would take a manual simulation to run once. This means that statistical information can be gleaned from such a model; outcomes can be quoted in terms of probabilities, and plans developed accordingly.
Removing the human element entirely means that the results of the simulation are only as good as the model itself. Validation thus becomes extremely significant — data must be correct, and must be handled correctly by the model. Various mathematical formulae have been devised over the years to attempt to predict everything from the effect of casualties on morale to the speed of movement of an army in difficult terrain. One of the best known is the Lanchester Square Law formulated by the British engineer Frederick Lanchester in 1914. He expressed the fighting strength of a (then) modern force as proportional to the square of its numerical strength multiplied by the fighting value of its individual units[13]. The Lanchester Law is often known as the attrition model, as it can be applied to show the balance between opposing forces as one side or the other loses numerical strength[14].
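The attrition dynamic described by Lanchester's law can be sketched numerically. The following is a minimal illustration with invented force sizes and effectiveness coefficients, not a real combat model:

```python
# A minimal numerical sketch of Lanchester's square law, with invented
# parameters. Attrition is modelled by the coupled equations
#   dA/dt = -beta * B,   dB/dt = -alpha * A
# integrated with a simple Euler step until one side is annihilated.

def lanchester(a0, b0, alpha, beta, dt=0.01):
    """Return the surviving strengths (survivors_a, survivors_b)."""
    a, b = float(a0), float(b0)
    while a > 0 and b > 0:
        a, b = a - beta * b * dt, b - alpha * a * dt
    return max(a, 0.0), max(b, 0.0)

# With equal unit effectiveness, 1000 troops against 700 should leave the
# larger force with roughly sqrt(1000**2 - 700**2), about 714, survivors:
# fighting strength scales with the *square* of numerical strength.
survivors_a, survivors_b = lanchester(1000, 700, alpha=0.05, beta=0.05)
```

The square of numbers dominating linear unit quality is exactly why the law is used to study force concentration: doubling numbers quadruples fighting strength, while doubling unit effectiveness only doubles it.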
[edit] Heuristic or Stochastic?
Another method of categorising military simulations is to divide them into two broad areas.
Heuristic simulations are those that are run with the intention of stimulating research and problem solving; they are not necessarily expected to provide empirical solutions.
Stochastic simulations are those that involve, at least to some extent, an element of chance.
Most military simulations fall somewhere in between these two definitions, although manual simulations lend themselves more to the heuristic approach and computerised ones to the stochastic.
Manual simulations, as described above, are often run to explore a 'what if?' scenario, and take place as much to provide the participants with some insight into decision-making processes and crisis management as to provide concrete conclusions. Indeed, such simulations do not even require a conclusion; once a set number of moves has been made and the time allotted has run out, the scenario will finish, regardless of whether the original situation has been resolved or not.
Computerised simulations can readily incorporate chance in the form of some sort of randomised element, and can be run many times to provide outcomes in terms of probabilities. In such situations, it is sometimes the unusual results that are of more interest than the expected ones. For example, if a simulation modelling an invasion of nation A by nation B were put through one hundred iterations to determine the likely depth of penetration into A's territory by B's forces after four weeks, an average result could be calculated. Examining those results, it might be found that the average penetration was around fifty kilometres; however, there would also be outlying results at the ends of the probability curve. At one end, it could be that the FEBA is found to have hardly moved at all; at the other, penetration could be hundreds of kilometres instead of tens. The analyst would then examine these outliers to determine why this was the case. In the first instance, it might be found that the computer model's random number generator had delivered results such that A's divisional artillery was much more effective than normal. In the second, it might be that the model generated a spell of particularly bad weather that kept A's air force grounded. This analysis can then be used to make recommendations: perhaps to look at ways in which artillery can be made more effective, or to invest in more all-weather fighter and ground-attack aircraft[15].
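The iterative approach just described can be sketched in a few lines of code. The model below, and every distribution and probability in it, is invented purely to illustrate the mean-plus-outliers analysis, not drawn from any real study:

```python
import random
import statistics

# A toy stochastic penetration model for the invasion example above.
# All distributions and probabilities are invented for illustration.
def penetration_km(rng):
    advance = max(rng.gauss(50, 15), 0.0)  # nominal advance, in kilometres
    if rng.random() < 0.05:                # rare case: defending artillery dominates
        advance *= 0.1
    elif rng.random() < 0.05:              # rare case: weather grounds the defender's air force
        advance *= 3.0
    return advance

rng = random.Random(42)                    # fixed seed so runs are repeatable
runs = [penetration_km(rng) for _ in range(100)]
mean_depth = statistics.mean(runs)         # the 'expected' result
outliers = [d for d in runs if d < 10 or d > 100]  # the interesting cases
```

The analyst's work begins where this sketch ends: each entry in `outliers` would be traced back to the random draws that produced it.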
[edit] Political-Military simulations
Since Carl von Clausewitz's famous declaration that war is merely a continuation of politics by other means[16], military planners have attempted to integrate political goals with military goals in their planning, with varying degrees of commitment. After World War II, political-military simulation in the West was largely concerned with the rise of the Soviet Union; more recently it has turned to the global war on terror. It became apparent that, to counter an enemy that was ideologically motivated, politics must be taken into account in any realistic strategic simulation.
This differed markedly from the traditional approach to military simulations. Kriegspiel was concerned only with the movement and engagement of military forces, and the simulations that followed were similarly focussed in their approach. Following the Prussian success in 1866 against Austria at Sadowa, the Austrians, French, British, Italians, Japanese and Russians all began to make use of wargaming as a training tool. The United States was relatively late to adopt the trend, but by 1889 wargaming was firmly embedded in the culture of the US Navy (with the Royal Navy as the projected adversary)[17].
Political-military simulations take a different approach from their purely military counterparts. Since they are largely concerned with policy issues rather than battlefield performance, they tend to be less prescriptive in their operation. However, various mathematical techniques have arisen in an attempt to bring rigour to the modelling process. One of these techniques is known as game theory; a commonly used method is that of non-zero-sum analysis, in which score tables are drawn up to enable selection of a decision such that each party involved acquires some benefit from the outcome of that decision (essentially producing a win-win situation).
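The score-table idea can be made concrete with a toy example. The strategy names and payoff values below are entirely invented; the point is only the mechanics of picking an outcome from which both parties benefit:

```python
# A toy non-zero-sum score table; all payoff values are invented.
# Keys are (blue_move, red_move); each cell holds (blue_score, red_score),
# so, unlike a zero-sum game, both sides can gain at once.
payoffs = {
    ("negotiate", "negotiate"): (3, 3),
    ("negotiate", "escalate"):  (0, 4),
    ("escalate",  "negotiate"): (4, 0),
    ("escalate",  "escalate"):  (1, 1),
}

def win_win_outcomes(table, floor=2):
    """Outcomes in which *both* parties score above a status-quo floor."""
    return [pair for pair, (blue, red) in table.items()
            if blue > floor and red > floor]

# Only mutual negotiation benefits both sides in this invented table.
best = win_win_outcomes(payoffs)   # [("negotiate", "negotiate")]
```

Note that each side's unilateral temptation (the 4-point cells) would leave the other with nothing, which is precisely the tension non-zero-sum analysis is meant to expose.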
It was not until 1954 that the first modern political-military simulation appeared (although the Germans had modelled a Polish invasion of Germany in 1929 that could fairly be labelled political-military[18]), and it was the United States that would elevate simulation to a tool of statecraft. The impetus was US concern about the burgeoning nuclear arms race: the Soviet Union exploded its first nuclear weapon in 1949, and by 1955 had developed its first true 'H' bomb[19]. A permanent gaming facility was created in the Pentagon, and various professional analysts were brought in to run it, including the social scientist Herbert Goldhamer, the economist Andrew Marshall and MIT professor Lincoln P Bloomfield[20].
Notable US political-military simulations run since World War II include the aforementioned SAFE, STRAW (Strategic Air War) and COW (Cold War)[21]. The typical political-military simulation is a manual or computer-assisted heuristic-type model, and many research organisations and think-tanks throughout the world are involved in providing this service to governments. During the Cold War, the RAND Corporation and the Massachusetts Institute of Technology, amongst others, ran simulations for the Pentagon that included modelling the Vietnam War, the fall of the Shah of Iran, the rise of pro-communist regimes in South America, tensions between India, Pakistan and China, and various potential flashpoints in Africa and South-East Asia[22]. Both MIT and RAND remain heavily involved in US military simulation, along with institutions such as Harvard, Stanford, and the National Defense University. Other nations have their equivalent organisations, such as Cranfield Institute's Defence Academy (formerly the Royal Military College of Science) in the United Kingdom.
Participants in the Pentagon simulations were sometimes of very high rank, including members of Congress and White House insiders as well as senior military officers[23]. The identity of many of the participants remains secret even today. It is a tradition in US simulations (and those run by many other nations) that participants are guaranteed anonymity. The main reason for this is that occasionally they may take on a role or express an opinion that is at odds with their professional or public stance (for example, portraying a fundamentalist terrorist or advocating hawkish military action), and thus could harm their reputation or career if their in-game persona became widely known. It is also traditional that in-game roles are played by participants of an equivalent rank in real life, although this is not a hard-and-fast rule and is often disregarded[24]. Whilst the major purpose of a political-military simulation is to provide insights that can be applied to real-world situations, it is very difficult to point to a particular decision as arising from a certain simulation, especially as the simulations themselves are usually classified for years, and even when released into the public domain are sometimes heavily censored. This is not only due to the unwritten policy of non-attribution, but also to avoid disclosing sensitive information to a potential adversary. This has been true within the simulation environment itself as well: former US president Ronald Reagan was a keen visitor to simulations conducted in the 1980s, but as an observer only. An official explained: "No president should ever disclose his hand, not even in a war game"[25].
Political-military simulations remain in widespread use today: modern simulations are concerned not with a potential war between superpowers, but more with international cooperation, the rise of global terrorism and smaller brushfire conflicts such as those in Kosovo, Bosnia, Sierra Leone and the Sudan. An example is the MNE (Multinational Experiment) series of simulations that have been run from the Ataturk Wargaming, Simulation and Culture Center in Istanbul over recent years. The latest, MNE 4, took place in early 2006. MNE includes participants from Australia, Canada, Finland, France, Germany, Sweden, the United Kingdom, the North Atlantic Treaty Organization (NATO) and the United States, and is designed to explore the use of diplomatic, economic and military power in the global arena[26].
[edit] Simulation and reality
Ideally military simulations should be as realistic as possible — that is, designed in such a way as to provide measurable, repeatable results that can be confirmed by observation of real-world events. This is especially true for simulations that are stochastic in nature, as they are used in a manner that is intended to produce useful, predictive outcomes. Any user of simulations must always bear in mind that they are, however, only an approximation of reality, and only as accurate as the model itself.
[edit] Validation
In the context of simulation, validation is the process of testing a model by supplying it with historical data and comparing its output to the known historical result. If a model can reliably reproduce known results, it is considered to be validated and assumed to be capable of providing predictive outputs (within a reasonable degree of uncertainty).
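That process can be sketched in code. The model and the historical cases below are invented stand-ins, chosen only to show the shape of a validation check (a hit rate against known outcomes, within a tolerance):

```python
# Validation sketch: feed a model historical inputs and count how often
# its output matches the known result within a tolerance. The 'model'
# and the case data here are invented stand-ins, not real combat data.

def validate(model, historical_cases, tolerance=0.1):
    """Fraction of cases reproduced within +/- tolerance of the known result."""
    hits = sum(
        1 for inputs, known in historical_cases
        if abs(model(inputs) - known) <= tolerance * abs(known)
    )
    return hits / len(historical_cases)

# Stand-in model: predicts casualties as a flat 30% of the force committed.
toy_model = lambda force: 0.3 * force
cases = [(1000, 310), (500, 160), (2000, 900)]  # (input, historical result)
score = validate(toy_model, cases)              # reproduces 2 of the 3 cases
```

A model scoring well below 1.0 on such a check would, on the definition above, not be considered validated, and its predictive output would be treated with corresponding caution.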
Developing realistic models has proven to be somewhat easier in naval simulations than on land[27]. One of the pioneers of naval simulations, Fletcher Pratt, designed his "Naval War Game" in the late 1930s, and was able to validate his model almost immediately by applying it to the encounter between the German pocket battleship Admiral Graf Spee and three British cruisers in the Battle of the River Plate off Montevideo in 1939. Rated on thickness of armour and gun power, Graf Spee should have been more than a match for the lighter cruisers, but according to Pratt's formula the cruisers should have won… which they did[28].
In contrast, many modern operations research models have proven unable to reproduce historical results when validated; the Atlas model, for instance, was shown in 1971 to be incapable of achieving more than a 68% correspondence with historical results[29]. Trevor Dupuy, a prominent American historian and military analyst known for airing often controversial views, has said that "many OR analysts and planners are convinced that neither history nor data from past wars has any relevance".[30] In Numbers, Predictions, and War, he implies that a model which cannot even reproduce a known outcome is little more than a whimsy, with no basis in reality.
Historically, there have even been a few rare occasions where a simulation was validated as it was being carried out. One notable occurrence was just before the famous Ardennes offensive in World War II, when the Germans attacked Allied forces during a period of bad weather in the winter of 1944, hoping to reach the port of Antwerp and force the Allies to sue for peace. According to German General Friedrich J Fangor, the staff of Fifth Panzerarmee had met in November to game defensive strategies against a simulated American attack. They had no sooner begun the exercise than reports began arriving of a strong American attack in the Hürtgen area: exactly the area they were gaming on their map table. Generalfeldmarschall Walter Model ordered the participants (apart from those commanders whose units were actually under attack) to continue playing, using the messages they were receiving from the front as game moves. For the next few hours simulation and reality ran hand-in-hand: when the officers at the game table decided that the situation warranted the commitment of reserves, the commander of the 116th Panzer Division was able to turn from the table and issue as operational orders the very moves they had just been gaming. The division was mobilised in the shortest possible time, and the American attack was repulsed[31].
Validation is a particular issue with political-military simulations, since much of the data produced is subjective. One controversial doctrine that arose from early post-WWII simulations was that of signalling — the idea that by making certain moves, it is possible to send a message to your opponent about your intentions: for example, by conspicuously conducting field exercises near a disputed border, a nation indicates its readiness to respond to any hostile incursions. This was fine in theory, and formed the basis of East-West interaction for much of the cold war, but was also problematic and dogged by criticism. An instance of the doctrine's shortcomings can be seen in the bombing offensives conducted by the United States during the Vietnam War. US commanders decided, largely as a result of their Sigma simulations, to carry out a limited bombing campaign against selected industrial targets in North Vietnam. The intention was to signal to the North Vietnamese high command that, whilst the United States was clearly capable of destroying a much greater proportion of their infrastructure, this was in the nature of a warning to scale down involvement in the South 'or else'. Unfortunately, as an anonymous analyst said of the offensive (which failed in its political aims), "they either didn't understand, or did understand but didn't care."[32] It was pointed out by critics that, since both Red and Blue teams in Sigma were played by Americans — with common language, training, thought processes and background — it was relatively easy for signals sent by one team to be understood by the other. Those signals, however, did not seem to translate well across the cultural divide.
[edit] Problems of simulation
Many of the criticisms directed towards military simulations derive from an incorrect application of them as a predictive and analytical tool. The outcome supplied by a model relies to a greater or lesser extent on human interpretation, and therefore should not be regarded as providing 'gospel' truth. However, whilst this is generally understood by most game theorists and analysts, it can be tempting for a layman (for example, a politician who needs to present a 'black and white' situation to his electorate) to settle on an interpretation that supports his preconceived position. Tom Clancy, in his novel Red Storm Rising, illustrated this problem when he had one of his characters, who was attempting to persuade the Soviet Politburo that the Warsaw Pact could win a conflict with NATO, use as evidence the results of a simulation carried out to model just such an event. It is revealed in the text that there were in fact three sets of results from the simulation: a best-, intermediate- and worst-case outcome. The advocate of war chose the best-case outcome, thus distorting the model to support his case[33]. This fictional scenario may, however, have been based on fact. The Japanese extensively wargamed their planned expansion during World War II, but map exercises conducted before the Pacific War were frequently stopped short of a conclusion where Japan was defeated. One often-cited example prior to the Battle of Midway had the umpires magically resurrecting a Japanese carrier that was sunk during a simulation, although Professor Robert Rubel argues in the Naval War College Review that their decision was justified in this case, given improbable rolls of the dice[34]. There were, however, equally illustrative fundamental problems with other areas of the simulation, mainly relating to a Japanese unwillingness to consider their position should the element of surprise, on which the Midway operation depended, be lost[35].
Tweaking simulations to make results conform with current political or military thinking is a recurring problem. In US naval exercises in the 1980s, it was informally understood that no high-value units such as aircraft carriers were allowed to be sunk[36], as naval policy at the time concentrated its tactical interest on such units. The outcome of one of the largest-ever NATO exercises, Ocean Venture-81, in which around 300 naval vessels, including two carrier battle groups, were adjudged to have successfully traversed the Atlantic and reached the Norwegian Sea despite the existence of a (real) 380-strong Soviet submarine fleet as well as their (simulated) red-team opposition, was publicly questioned in Proceedings, the professional journal of the US Naval Institute[37]. The US Navy managed to get the article classified, and it remains secret to this day, but the article's author and chief analyst of Ocean Venture-81, Lieutenant Commander Dean L Knuth, has since claimed that two Blue aircraft carriers were successfully attacked and sunk by Red forces[38].
There have been many charges over the years of computerised models, too, being unrealistic and slanted towards a particular outcome. Critics point to the case of military contractors seeking to sell a weapons system. For obvious reasons of cost, weapons systems (such as an air-to-air missile system for use by fighter aircraft) are extensively modelled on computer. Without testing of their own, a potential buyer must rely to a large extent on the manufacturer's own model. This might well indicate a very effective system, with a high kill probability (Pk). However, it may be that the model has been set up to show the weapons system under ideal conditions, and its actual operational effectiveness will be somewhat less than stated. The US Air Force quoted their AIM-9 Sidewinder missile as having a Pk of 0.98 (i.e. it will successfully kill 98% of the targets it is fired at). However, during the Falklands War in 1982, the British recorded its actual Pk in operational use as 0.78[39].
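The gap between a claimed and an observed Pk amounts to a simple point estimate from firing records. The tallies below are illustrative placeholders, not the actual Falklands figures:

```python
# Estimating an observed kill probability from combat firing records and
# comparing it with the manufacturer's claim. The kill/shot tallies are
# illustrative placeholders, not the actual Falklands figures.

def observed_pk(kills, shots):
    """Simple point estimate of Pk from combat data."""
    if shots == 0:
        raise ValueError("no firings recorded")
    return kills / shots

claimed_pk = 0.98
pk = observed_pk(18, 23)       # hypothetical: 18 kills from 23 firings
shortfall = claimed_pk - pk    # gap between the model and operational use
```

With realistic numbers of firings the estimate also carries wide statistical uncertainty, which is a further reason to treat a single quoted Pk figure with caution.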
Another factor that can render a model invalid is human error. One notorious example was the US Air Force's Advanced Penetration Model, which, due to a programming error, made US bombers invulnerable to enemy air defences by inadvertently altering their latitude or longitude when checking their location for a missile impact. This had the effect of 'teleporting' the bomber, at the instant of impact, hundreds or even thousands of miles away, causing the missile to miss. Furthermore, this error went unnoticed for a number of years[40]. Other unrealistic models have had battleships consistently steaming at seventy knots (twice their top speed), an entire tank army halted by a border police detachment, and attrition levels 50% higher than the actual numbers each force began with[40].
Issues of enemy technical capability will also affect any model used. Whilst the modeller can expect to create a reasonably accurate picture of their own nation's military capability, discovering valid data for an opponent may be extremely difficult. As Len Deighton famously pointed out in Spy Story, if the enemy has an unanticipated capacity, it may render tactical and strategic assumptions so much nonsense.
Human factors have been a constant thorn in the side of the designers of military simulations. Whereas political-military simulations are often required by their nature to grapple with what modellers refer to as 'squishy' problems, purely military models often seem to prefer to concentrate on hard numbers. Whilst a warship can be regarded, from the perspective of a model, as a single entity with known parameters (speed, armour, gun power and the like), land warfare often depends on the actions of small groups or individual soldiers, where training, morale and personalities come into play. For this reason it is more taxing to model: there are many variables that are difficult to formulate. Commercial wargames, both the tabletop and computer variety, often attempt to take these factors into account: in Rome: Total War, for example, units will generally rout from the field rather than stay to fight to the last man. One valid criticism of some military simulations is that these nebulous human factors are often ignored (partly because they are so hard to model accurately, and partly because no commander likes to acknowledge that men under his command may disobey him). In recognition of this shortcoming, military analysts have in the past turned to civilian wargames as being more rigorous, or at least more realistic, in their approach to warfare. In the United States, James F Dunnigan, a prominent student of warfare and founder of the commercial tabletop wargames publisher Simulations Publications Incorporated (SPI, now defunct), was brought into the Pentagon's wargaming circle in 1980 to work with RAND and Science Applications Incorporated (SAI) on the development of a more realistic model[41]. The result, known as SAS (Strategic Analysis Simulation), is still being used[42].
All the above means that models of warfare should be taken for no more than they are: a non-prescriptive attempt to inform the decision-making process. The dangers of treating military simulation as gospel are illustrated in an anecdote circulated at the end of the Vietnam War, which was intensively gamed between 1964 and 1969 (with even President Lyndon Johnson being photographed standing over a wargaming sand table at the time of Khe Sanh) in a series of simulations codenamed Sigma[43]. The period was one of great belief in the value of military simulations, riding on the back of the proven success of operations research or OR (known as operational analysis in the UK, as OR is used for operational requirements) during World War II[44] and the growing power of computers in handling large amounts of data.
The story concerned a fictional aide in Richard Nixon's administration, who, when Nixon took over government in 1969, fed all the data held by the US that pertained to both the United States and North Vietnam into a computer model — population, gross national product, relative military strength, manufacturing capacity, numbers of tanks, aircraft and the like. The aide then asked the model: "When will we win?" Apparently the computer replied "You won in 1964!"[45]
[edit] Further Reading
- Thomas B Allen, War Games: Inside the Secret World of the Men who Play at Annihilation, New York, McGraw Hill, 1987, ISBN 0749300116
- Trevor N Dupuy, Numbers, Predictions and War: The Use of History To Evaluate and Predict the Outcome of Armed Conflict, Revised Edition, Fairfax VA, Hero Books, 1985, ISBN 0672521318
- David Halberstam, The Best and the Brightest, Ballantine Books, 1993, ISBN 0449908704
- Andrew Wilson, The Bomb and the Computer, London, Barrie & Rockliff, Cresset P, 1968, ISBN 0214667278
- Harry G Summers, On Strategy: A Critical Analysis of the Vietnam War, Presidio Press, 1982, ISBN 0891415637
- John Bayliss, Strategy in the Contemporary World: An Introduction to Strategic Studies, Oxford University Press, 2002, ISBN 019878273X
- Carl von Clausewitz, On War, Wordsworth, 1997, ISBN 1853264822
[edit] Links to external sites
- National Strategic Gaming Centre
- Rand's National Security Research Division
- StrategyPage.com's Wargames page The site is edited by James F Dunnigan, founder of wargames company Simulations Publications Inc (SPI)
[edit] References
- ^ J G Taylor, Modeling and Simulation of Land Combat, ed L G Callahan, Georgia Institute of Technology, Atlanta, GA, 1983
- ^ BATUS official site
- ^ H E Hall, Norman Shapiro, Herbert J Shukiar, Overview of RSAC system software : a briefing, RAND Corporation, 1993, [1]
- ^ The three steps of the Scientific Method: Observation, Hypothesis, Experimentation
- ^ Center for International Policy, programme of US Military Exercises "Exercises are generally the largest, in terms of cost and personnel, of the many types of U.S. military deployments for training..."
- ^ The origins of Chess
- ^ Matthew Caffrey, History of Wargames: Toward a History Based Doctrine for Wargaming, 2000, StrategyPage.com: "There was near universal agreement that Prussia's victories were due to generalship. This advantage in generalship was produced by her War College and her general staff system, and behind the success of both stood wargaming."
- ^ Andrew Wilson, The Bomb and the Computer, London, Barrie & Rockliff, Cresset P, 1968, ISBN 0214667278, p6
- ^ Edgardo B Matute, Birth and Evolution of War Games, Military Review 50, No7, 1970, p53
- ^ Thomas B Allen, War Games: Inside the Secret World of the Men who Play at Annihilation, New York, McGraw Hill, 1987, ISBN 0749300116, p141
- ^ ibid, pp11-20 The author describes a wargame he was permitted to observe
- ^ ibid, p328
- ^ F W Lanchester, Aircraft in War: The Dawn of the Fourth Arm, Lanchester Pr Inc, 1999, ISBN 157321017X A reprint of the original 1916 issue
- ^ A F Karr, Lanchester Attrition Processes and Theater-Level Combat Models, Mathematics of Conflict, Elsevier Science Publishers B V, 1983, ISBN 0444866787
- ^ Thomas B Allen, War Games: Inside the Secret World of the Men who Play at Annihilation, New York, McGraw Hill, 1987, ISBN 0749300116, p332
- ^ Carl von Clausewitz (trans. J J Graham), On War, Wordsworth, 1997, ISBN 1853264822
- ^ Thomas B Allen, War Games: Inside the Secret World of the Men who Play at Annihilation, New York, McGraw Hill, 1987, ISBN 0749300116, p120
- ^ ibid, p122
- ^ Nuclear Weapon Archive's history of Soviet nuclear development
- ^ Thomas B Allen, War Games: Inside the Secret World of the Men who Play at Annihilation, New York, McGraw Hill, 1987, ISBN 0749300116, p148
- ^ ibid, p152
- ^ ibid
- ^ ibid, ch 11: Red and Blue in the White House
- ^ ibid, p33
- ^ ibid, p215
- ^ USJFCOM MNE 4 overview
- ^ Thomas B Allen, War Games: Inside the Secret World of the Men who Play at Annihilation, New York, McGraw Hill, 1987, ISBN 0749300116, p123
- ^ Fletcher Pratt, Fletcher Pratt's Naval War Game, New York, Harrison-Hilton Books, 1940, Out of print
- ^ Trevor N Dupuy, Numbers, Predictions, and War, Indianapolis, IN: Bobbs-Merrill Company, 1979, ISBN 0-672-52131-8, p57
- ^ ibid, p41
- ^ Thomas B Allen, War Games: Inside the Secret World of the Men who Play at Annihilation, New York, McGraw Hill, 1987, ISBN 0749300116, p129
- ^ ibid
- ^ Tom Clancy, Red Storm Rising, HarperCollins, 1988, ISBN 0006173624
- ^ Robert C Rubel, The epistemology of war gaming, Naval War College Review, Spring 2006 on findarticles.com
- ^ Mitsuo Fuchida, Midway: The Battle That Doomed Japan, Naval Institute Press (new edition), 2001, ISBN 1557504288, pp96-97
- ^ Thomas B Allen, War Games: Inside the Secret World of the Men who Play at Annihilation, New York, McGraw Hill, 1987, ISBN 0749300116, p288
- ^ ibid, p289
- ^ Professor Roger Thompson, Professor of Military Studies, Knightsbridge University, in a discussion paper: Is the US Navy Overrated?, 2005, listed in the German Armed Forces Institute for Social Research (Sozialwissenschaftliches Institut Der Bundeswehr) Database on Military Sociological Studies
- ^ Thomas B Allen, War Games: Inside the Secret World of the Men who Play at Annihilation, New York, McGraw Hill, 1987, ISBN 0749300116, p290
- ^ a b ibid, p317
- ^ ibid, p93
- ^ James F Dunnigan, Wargames at War: Wargaming and the Professional Warriors, Strategypage.com
- ^ David Halberstam, The Best and the Brightest, Ballantine Books, 1993 (20th anv edition), ISBN 0449908704
- ^ Andrew Wilson, The Bomb and the Computer, London, Barrie & Rockliff, Cresset P, 1968, ISBN 0214667278
- ^ Harry G Summers, On Strategy: A Critical Analysis of the Vietnam War, Presidio Press, 1982, ISBN 0891415637