Ubiquitous command and control
From Wikipedia, the free encyclopedia
Ubiquitous command and control (UC2, pronounced "you see too", pun intended) is a concept for the future of command and control. UC2 is a refinement and evolution of the thesis of Network Centric Warfare (NCW). The UC2 position seeks to achieve "unity with diversity", and offers scope for extreme robustness.
History and method of UC2 development
The concept was first presented by Dale Lambert in a paper titled "Ubiquitous Command and Control" at the Information, Decision and Control Conference in Adelaide, Australia, in 1999. It was subsequently developed further and presented by Dale Lambert and Jason Scholz in a paper titled "A Dialectic for Network Centric Warfare" at the International Command and Control Research and Technology Symposium (ICCRTS) in McLean, Virginia, United States, in 2005. Both papers are listed in the references below.
UC2 was developed as a new synthesis, using a Hegelian dialectic method. Stage one of the method analyses a dogmatic thesis held by a community. Stage two develops an antithesis: a criticism of that thesis that has led some to reject it. Stage three develops a synthesis, formed by unifying the thesis and antithesis while attempting to avoid the myopic dispositions of each.
The 1999 paper used the US "Cooperative Engagement Capability" as a thesis. The 2005 paper used "Network Centric Warfare" as the thesis.
This work was performed by the Command and Control Division, Defence Science and Technology Organisation (DSTO), Australia.
Seven core tenets
- Decision Devolution enables the social collective to decide, rather than governing individuals, in order to benefit from the diversity of expertise.
- Ubiquity of C2 offers extreme robustness through agreements between similar, rather than identical, C2 capabilities on every platform.
- Automation provides the basis for ubiquity by extending intrinsic human capabilities with automated semantic and cognitive decision makers and aids.
- Integration between people and machines is managed through mixed initiative strategies and by equipping cognitive machines with storytelling technologies.
- Distributed locations allow seamless virtual integration with the robustness of physical diversity, and decentralised intent provides unity through mission agreements with robustness through a diversity of underlying intent.
- Social Coordination among people and machines in a collective can be flexibly achieved through automated social agreement protocols and social policies.
- Management levels naturally arise from commonalities of location and intention.
Decision devolution
1: Decision Devolution enables the social collective to decide, rather than governing individuals, in order to benefit from the diversity of expertise.
Decision devolution aligns with the "power to the edge" sentiments expressed by NCW practitioners. Decision devolution is founded upon the idea that additional individuals or entities are not always required to govern collectives. When appropriately equipped, collectives can sometimes govern themselves. In the military context, this signals dynamic liaisons adaptively forming from operational assets without the oversight of a command headquarters.
The conduct of military operations without the oversight of a command headquarters is, of course, anathema to current military practice, and might well foster allegations of heresy. But large-scale collectives can operate successfully without a ruling class, as exemplified by Internet sites and services such as Wikipedia, eBay, YouTube, Geocaching, MySpace, chat rooms, instant messaging, Digg, and Second Life.
Command involves the creative expression of intent to another. Control involves the expression of a plan to another, and the monitoring and correction of the execution of that plan. Processes akin to these operate within eBay on a significant scale, without the oversight of a ruling class. Command resembles the vendor expressing an intention of sale, with any member of the collective potentially being a vendor. Control resembles the process by which the purchaser acquires the sale item, with any member of the collective potentially being a purchaser. Control works in eBay because the collective is largely self-monitoring and self-correcting via a peer rating scheme: customer satisfaction with each transaction is recorded and made visible to all in the collective. Ideally, this monitoring mechanism then facilitates correction, by steering prospective purchasers away from vendors whose history of fraud has been exposed.
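The self-regulating control loop described above can be sketched in a few lines of code. The following Python fragment is a minimal illustration only (the class and method names are invented for this example, not drawn from the UC2 papers or from eBay's actual systems): transaction feedback accumulates into publicly visible vendor ratings, and prospective purchasers are steered away from vendors whose ratings fall below a threshold.

```python
from collections import defaultdict


class ReputationLedger:
    """Toy peer-rating scheme: per-transaction feedback becomes a visible rating."""

    def __init__(self):
        self._feedback = defaultdict(list)  # vendor -> list of scores in [0, 1]

    def record(self, vendor: str, satisfied: bool) -> None:
        """Each purchaser rates each completed transaction."""
        self._feedback[vendor].append(1.0 if satisfied else 0.0)

    def rating(self, vendor: str) -> float:
        """Visible to the whole collective; unknown vendors default to neutral."""
        scores = self._feedback[vendor]
        return sum(scores) / len(scores) if scores else 0.5

    def trustworthy(self, vendor: str, threshold: float = 0.8) -> bool:
        """Prospective purchasers are steered away from low-rated vendors."""
        return self.rating(vendor) >= threshold


ledger = ReputationLedger()
ledger.record("vendor_a", satisfied=True)
ledger.record("vendor_a", satisfied=True)
ledger.record("vendor_b", satisfied=False)
print(ledger.trustworthy("vendor_a"))  # True: self-monitoring sustains the collective
print(ledger.trustworthy("vendor_b"))  # False: correction without a ruling class
```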
The potential benefits of decision devolution are flexibility and redundancy. Flexibility can arise through the ability to share the load throughout the collective, often through mobile platforms. Redundancy ensues because the conduct of military operations can still proceed even if its command centre becomes inoperative.
Ubiquity
2: Ubiquity of C2 offers extreme robustness through agreements between similar, rather than identical, C2 capabilities on every platform.
The ubiquity tenet argues: (i) for a C2 component on every platform; and (ii) that these components should be similar, not identical.
Graceful degradation
A C2 component on every platform allows command and control to degrade gracefully under strike by reconfiguring C2 among the remaining assets.
In the Information Age, C2 centres have become the enemy's centre of gravity, and are therefore prime targets for precision strike. One defence against precision strike is to build a duplicate C2 centre: the neutralisation of the primary centre is then less catastrophic, as the duplicate can assume its function. But such redundancy offers only one level of reprieve. By enabling C2 functionality to reconfigure as necessary, ubiquity offers greater sustainability, allowing the quality of defence to degrade gracefully, rather than instantaneously, under the threat of precision strike. In principle, defeating a UC2 system amounts to defeating all of its assets.
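As a rough illustration of graceful degradation through reconfiguration, the following Python sketch (all names are hypothetical; this is not an implementation from the UC2 papers) models a collective in which every platform can host C2 roles, and the roles of a struck platform migrate to the least-loaded survivors.

```python
class UC2Network:
    """Toy model: C2 roles are re-hosted on surviving platforms after a loss."""

    def __init__(self, platforms):
        self.platforms = set(platforms)
        self.roles = {}  # role name -> platform currently hosting it

    def assign(self, role: str, platform: str) -> None:
        self.roles[role] = platform

    def lose_platform(self, platform: str) -> None:
        """Reconfigure: roles hosted on the lost platform migrate elsewhere."""
        self.platforms.discard(platform)
        if not self.platforms:
            return  # only when every asset is defeated is C2 lost entirely

        def load(p: str) -> int:
            return sum(1 for host in self.roles.values() if host == p)

        for role, host in list(self.roles.items()):
            if host == platform:
                self.roles[role] = min(self.platforms, key=load)


net = UC2Network({"frigate", "awacs", "uav_1", "uav_2"})
net.assign("air_picture", "awacs")
net.assign("strike_tasking", "frigate")
net.lose_platform("awacs")          # the former centre of gravity is struck...
print(net.roles["air_picture"])     # ...but the role is re-hosted on a survivor
```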
Agreement
UC2 advocates the use of similar, rather than identical, components. Three interpretations of "common" can be distinguished:
Common as identity
The first, "common as identity", involves disseminating an identical picture to each person in the collective. This mistakes identity for unity. The Great Irish Potato Famine of the 1840s led to a significant number of deaths and refugees. It resulted from a uniform dependency on an identical food source (potatoes) that became infected. The distribution of an identical picture beckons analogous concerns, as an infected picture might ensure everyone has the wrong understanding. Another drawback with "common as identity" is that not everyone wants to see an identical picture. Different individuals are interested in different aspects of the environment and at different levels of granularity.
Common as consistency
The second interpretation is "common as consistency". Instead of disseminating an identical picture, consistent databases and/or information feeds are disseminated. This allows different individuals to generate their own picture of interest from the same underlying consistent information. But consistency is not as desirable as it first seems.
For example, if a data fusion system receives assertion α from source X and assertion not(α) from an equally trusted source Y, which assertion should be entered into the consistent database? If the wrong one is entered, then the wrong information is propagated to every individual in the environment. A further difficulty is that inconsistencies cannot simply be legislated away; it is worth examining their sources in order to appreciate this. There are at least three:
- A first source is error. The errors may be mechanical or human.
- A second origin of inconsistency is conceptualization. Not all inconsistencies derive from someone or something incorrectly registering the way the world is. Some inconsistencies arise because the world can be more than one way.
- A third origin of inconsistency is partiality. We frequently need to make assumptions in order to make information more complete. For example, upon receipt of a consistent report, individual X may form a consistent theory by adding assumption α to the report, while individual Y may form a consistent theory by adding assumption not(α) to the report. X and Y then hold theories that are individually consistent but mutually inconsistent. So an attempt to maintain awareness in the face of partial information can itself lead to mutually inconsistent theories.
Inconsistencies are inevitable in any NCW system, and the "common as consistency" approach of pretending that they will not occur is untenable. "Common as consistency" mistakes consistency for unity, and it lacks robustness because enforcing consistency eliminates diversity.
Common as agreement
Having similar, rather than identical, components offers a balance between unity and diversity, in the spirit of the synthesis.
"Common as agreement" allows individuals to harbour both public and private views, with the former being a product of agreement with other individuals, while the latter retains alternatives should they be required.
In the previous example, under the weight of public opinion, individual Y might be persuaded to publicly accept the statement α, but is free to privately retain his or her reasons for endorsing not(α). This might subsequently prove invaluable if not(α) turns out to be correct. Inconsistencies should be managed, not discarded. Agreement facilitates social unity while retaining the robustness of diversity.
An agreement procedure provides a robust method for information management.
This model of social cooperation is not practically sustainable without procedures for conflict resolution. Democratic or other procedures may be employed to allow individuals some input into the resolution of their disputes. The social coordination tenet, discussed later, describes protocols to achieve this arbitration. These protocols may embody authority within the machine, not unlike the eBay example described earlier.
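A minimal Python sketch of "common as agreement", using the α / not(α) example above (the class and function names are illustrative assumptions, and the majority-style procedure is just one possible agreement protocol): each member holds a private view, an agreement procedure fixes the shared public view, and the private views are retained rather than discarded.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Member:
    name: str
    private_view: Dict[str, bool] = field(default_factory=dict)  # assertion -> belief
    public_view: Dict[str, bool] = field(default_factory=dict)   # assertion -> agreed value

    def observe(self, assertion: str, holds: bool) -> None:
        self.private_view[assertion] = holds


def agree(members: List[Member], assertion: str) -> bool:
    """Toy agreement procedure: the public view follows the weight of opinion,
    but every member keeps its private view for later re-examination."""
    votes = [m.private_view[assertion] for m in members if assertion in m.private_view]
    outcome = votes.count(True) >= votes.count(False)
    for m in members:
        m.public_view[assertion] = outcome
    return outcome


x, y, z = Member("X"), Member("Y"), Member("Z")
x.observe("alpha", True)       # source X asserts alpha
y.observe("alpha", False)      # source Y asserts not(alpha)
z.observe("alpha", True)
agree([x, y, z], "alpha")
print(y.public_view["alpha"])   # True: Y publicly accepts alpha...
print(y.private_view["alpha"])  # False: ...but privately retains not(alpha)
```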
Automation
3: Automation provides the basis for ubiquity by extending intrinsic human capabilities with automated semantic and cognitive decision makers and aids.
Automation is the primary mechanism for acquiring a similar C2 capability on every platform.
Some decision-making can be fully automated. Other aspects will perform better with human interaction, with the choice between the two best determined empirically. This promotes the role of automated decision makers and automated decision aids within UC2 systems, with the similarity in C2 components emanating from a similarity in the automated decision makers and aids. The automated decision aids will vary in their reliance on human cognition, ranging from elementary structured interfaces through to complex decision advisory systems.
The automation tenet argues that some expertise should be automated through software, and indeed, that this is the mechanism by which ubiquity might be achievable.
Automated decision making
The prospect of automated decision-making in a military context is controversial. Some might contend on moral grounds that military operations should be immune from the automation otherwise progressing through society. There are two responses to this. First, automation will proceed in military operations whether or not it should: in 2000 the US Congress set the goal that one third of deep-strike aircraft be unmanned by 2010 and one third of ground combat vehicles be unmanned by 2015, and the DARPA Grand Challenge is illustrative of the progress. Second, there is a case for including automation within military weaponry. Automobiles rival wars as a contributor to human death, and yet the automobile industry is one of the leaders in integrating automated decision makers, with much of the manufacturers' motivation being to make automobiles safer. A similar motivation could apply in a military context. If a missile instructed to destroy a train bridge observes, or is informed, as it approaches that a passenger train is traversing the bridge, then one would want the missile to exercise moral judgment and defer its strike until the train has departed the scene. This might be achieved by building Rules of Engagement (ROE) into the missile that ensure conformance with national moral intent.
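The passenger-train scenario can be made concrete with a short sketch of an embedded rules-of-engagement check. This Python fragment is illustrative only (the rule set and field names are assumptions, not actual weapon-system logic): every embedded ROE must be satisfied immediately before weapon release, otherwise the strike is deferred.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Situation:
    target: str
    collocated_entities: List[str]   # what the weapon observes or is told


# Each rule of engagement maps a situation to True (strike permitted) or False.
RULES_OF_ENGAGEMENT: List[Callable[[Situation], bool]] = [
    lambda s: "passenger_train" not in s.collocated_entities,
    lambda s: "hospital" not in s.collocated_entities,
]


def strike_permitted(situation: Situation) -> bool:
    """Strike only when every embedded ROE is satisfied; otherwise defer."""
    return all(rule(situation) for rule in RULES_OF_ENGAGEMENT)


bridge_now = Situation("train_bridge", collocated_entities=["passenger_train"])
bridge_later = Situation("train_bridge", collocated_entities=[])
print(strike_permitted(bridge_now))    # False: defer the strike
print(strike_permitted(bridge_later))  # True: the train has departed the scene
```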
Ubiquity through automation
The advantage of encapsulating expertise in software is that it is easily replicated, adapted and distributed. This makes the expertise readily transferable, which enables the ubiquity of C2 capability.
The encapsulation of expertise in software will gain in currency as two mindset changes become more pronounced. The first is an acceptance of semantic machines. Computers are so named because they were conceived during a wartime calculation boom as rapid number-crunching devices (e.g. ENIAC). Nowadays computers are instead viewed as something akin to post office boxes: repositories in which people store information so that they, or other people, can access that information later. The machines themselves have no understanding of the information they hold. A shift is now under way toward a semantic web and semantic machines, which associate meanings with the information they hold about the world by constraining its possible interpretations through formal logics.
The second mindset change is an acceptance of cognitive machines. Circa 2006, computers are viewed as machines that hold information that people reason about. In time, computers will come to be understood as machines hosting software agents that people reason with.
Integration
4: Integration between people and machines is managed through mixed initiative strategies and by equipping cognitive machines with storytelling technologies.
The integration tenet addresses the integration of people and machines. It makes two points: one concerning mixed initiative and the other concerning improved interaction.
Mixed initiative
In UC2 systems, automated and human decision-making are fully integrated. Integration exists so that weaknesses in some parts of a UC2 system are complemented by strengths in other parts. This includes the division of labour between people and machines.
James Reason, who has undertaken extensive research on human expertise and error, captures the intent from the human perspective by contrasting "the human as hazard" with "the human as hero". People can exhibit great flexibility, adaptation, recovery and improvisation to perform heroic acts. The recovery of Apollo 13, and chess grandmasters who can play blindfold chess simultaneously against more than forty opponents, are examples of the remarkable capability of humans. But humans also make errors: commonplace errors of little consequence, and uncommon errors with serious consequences. The shooting down of the Iranian passenger aircraft Iran Air Flight 655 by the US Navy in 1988, and the Space Shuttle Challenger disaster, are examples of the human as hazard. Most importantly, the heroes and the hazards are not two different groups of people; the heroes are sometimes hazards.
The division of labour between people and machines should be developed to leave human decision-making unfettered by machine interference when it is likely to prove heroic, and to enhance human decision-making with automated decision aids, or possibly override it with automated decision-making, when it is likely to be hazardous. Overriding human decision-making may seem a highly contentious suggestion. However, motor vehicle drivers who must contend with traffic lights are required to obey machine orders, with good reason. For similar reasons of safety, where a machine detects a violation of the rules of engagement, it should at least be able to question the order before complying. The appropriate balance between the exercise of intent by people and machines is best determined empirically, rather than on the basis of a priori belief.
When automated components substitute for functionality currently provided by people in hierarchic structures, including social coordination functionality, those automated agents must accept the authority, responsibility and competency associated with that functionality. For automated agents, these are ordered by competency first, then responsibility, then authority:
- An automated agent’s competency will depend on the expertise embedded within it, and the agreements it forms should primarily derive from its competencies.
- An automated agent’s responsibility will follow from the social agreements it forms, given available competencies.
- An automated agent’s authority is not determined by a priori rank, but depends upon the role it assumes in social agreements, given available competencies.
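A minimal Python sketch of the ordering just listed (the names are illustrative assumptions, not from the UC2 papers): an automated agent can only enter agreements that fall within its competencies, its responsibilities are the agreements it actually forms, and any authority over it derives from the roles taken in those agreements rather than from prior rank.

```python
from dataclasses import dataclass, field
from typing import List, Set, Tuple


@dataclass
class Agent:
    name: str
    competencies: Set[str]                                                # what the agent can do
    responsibilities: List[str] = field(default_factory=list)            # agreements it has formed
    authority_over: List[Tuple[str, str]] = field(default_factory=list)  # (agent, task) pairs


def form_agreement(tasker: Agent, performer: Agent, task: str) -> bool:
    """Competency first: no agreement unless the performer is competent.
    Responsibility follows the agreement; authority follows the role taken."""
    if task not in performer.competencies:
        return False
    performer.responsibilities.append(task)
    tasker.authority_over.append((performer.name, task))
    return True


controller = Agent("uav_controller", competencies={"route_planning"})
uav = Agent("uav_7", competencies={"surveillance", "relay"})
print(form_agreement(controller, uav, "surveillance"))  # True: within competency
print(form_agreement(controller, uav, "strike"))        # False: outside competency, refused
print(uav.responsibilities)                             # ['surveillance']
```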
Improved interaction
The integration tenet also contends that, as machines acquire an ability to reason about their environment (that is, to comprehend and project), they will require a means of presenting information to people that goes beyond simple "dots on maps" displays and the desktop metaphor. In essence, the machines need a storytelling capability.
In our everyday lives, television news often provides our situation awareness about the world. It does this by assembling presenters, maps, diagrams and video footage to convey stories about daily events of interest. Software virtual advisers, virtual battlespaces, virtual interaction mechanisms and environments, and virtual videos, are respective software counterparts to the presenters, maps, diagrams and video footage featured in news services.
As software, it allows the machine to generate stories from its accessible information. As software it is portable, being easily replicated, adapted and distributed throughout a network. And unlike television news services, as software it is interactive, allowing the user to access the information of interest to them.
Distributed and decentralised
5: Distributed locations allow seamless virtual integration with the robustness of physical diversity, and decentralised intent provides unity through mission agreements with robustness through a diversity of underlying intent.
UC2 advocates diversity by endorsing C2 that is both distributed and decentralised.
Distributed UC2
Distributed UC2 postulates that C2 should be distributed across locations.
Distributed UC2 affords location independent access (unity) while the physical distribution of information (diversity) offers protection from spatio-temporally constrained strike capabilities like missiles.
Decentralised UC2
Decentralised UC2 postulates that C2 should support the decentralisation of intent. Each member of the collective should have the capacity to ask (pull awareness), tell (push awareness), command (push intent) and obey (accept intent). The decentralisation of intent therefore allows for agreements about intent as well as awareness. Decentralised UC2 affords protection from strike capabilities that target centralised will (the origin and ownership of intent) like assassination and blackmail. UC2 combines distributed UC2 with decentralised UC2. It accommodates a diversity of intent situated at a diversity of locations.
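The four capacities of ask, tell, command and obey can be read as a small message protocol. The following Python sketch (the message verbs follow the tenet; everything else, including the class names, is an assumption made for illustration) shows every member of the collective able to push and pull awareness and to push and accept intent.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Dict, List, Optional


class Verb(Enum):
    ASK = auto()      # pull awareness
    TELL = auto()     # push awareness
    COMMAND = auto()  # push intent
    OBEY = auto()     # accept intent


@dataclass
class Message:
    sender: str
    receiver: str
    verb: Verb
    content: str


@dataclass
class Node:
    """Any member of the collective can originate and accept both awareness and intent."""
    name: str
    awareness: Dict[str, str] = field(default_factory=dict)
    accepted_intent: List[str] = field(default_factory=list)

    def handle(self, msg: Message) -> Optional[Message]:
        if msg.verb is Verb.TELL:                        # awareness pushed to us
            self.awareness[msg.sender] = msg.content
        elif msg.verb is Verb.ASK:                       # awareness pulled from us
            report = self.awareness.get(self.name, "no report")
            return Message(self.name, msg.sender, Verb.TELL, report)
        elif msg.verb is Verb.COMMAND:                   # intent pushed to us
            self.accepted_intent.append(msg.content)     # here we choose to accept it
            return Message(self.name, msg.sender, Verb.OBEY, msg.content)
        return None


boat, uav = Node("patrol_boat"), Node("uav_3")
reply = uav.handle(Message("patrol_boat", "uav_3", Verb.COMMAND, "survey sector 4"))
print(uav.accepted_intent)  # ['survey sector 4']: intent originated at "the edge"
print(reply.verb)           # Verb.OBEY
```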
In the synthesis, decentralised UC2 gives rise to what one might term mission agreement. Mission agreement allows for agreements that are not restricted to a hierarchical top down cascading of intent. Consequently, mission agreement supersedes Mission Command because it allows for intent network structures of which intent hierarchical structures are but one type. In thesis terms, intent can be introduced at "the edge" of an organisation and propagate inwards if it garners sufficient support. This introduces a "command fusion" (intention) issue to complement the "information fusion" (awareness) issue already present under Mission Command.
The generalisation of hierarchies to networks allows hierarchies to be used when they are appropriate, and non-hierarchical networks when hierarchies are not.
Social coordination
6: Social Coordination among people and machines in a collective can be flexibly achieved through automated social agreement protocols and social policies.
In general a UC2 system will have a demand pool of human and machine agents offering intent, and a supply pool of human and machine agents offering capability. Moreover, the two pools will generally overlap, as any member of the collective can be a member of either pool. The challenge is to manage this level of flexibility without anarchy.
UC2 systems can achieve social coordination by instituting social agreement protocols that coordinate collectives composed of both people and machines. The social coordination can be instituted through software, that is, as more sophisticated variants of existing workflow systems. In essence, eBay is a social agreement protocol implemented through software: the cost of finding information and expertise in this system is low, and the agreement and monitoring mechanisms provide feedback for self-regulation.
Social agreement protocols facilitate adaptive cooperative alliances of the sort canvassed earlier, through the formation of contractual agreements between members of the collective. They can also generate adaptive competitive factions, as members of the collective compete for capability resources to satisfy their intent. In their primitive form, such protocols admit a laissez-faire management style.
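As a deliberately simplified stand-in for such a social agreement protocol (not the specific protocols advocated in the UC2 papers; all names are hypothetical), the following Python sketch matches a demand pool of intent against a supply pool of capability, forming contract-like agreements where a match exists and leaving unmet intent for later rounds.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Intent:
    issuer: str
    task: str


@dataclass
class Capability:
    provider: str
    task: str
    available: bool = True


@dataclass
class Agreement:
    issuer: str
    provider: str
    task: str


def coordinate(demand: List[Intent], supply: List[Capability]) -> List[Agreement]:
    """Toy social agreement protocol: each expressed intent is matched with the
    first available capability that can satisfy it, forming a contract-like agreement."""
    agreements: List[Agreement] = []
    for intent in demand:
        match: Optional[Capability] = next(
            (c for c in supply if c.available and c.task == intent.task), None)
        if match is not None:
            match.available = False
            agreements.append(Agreement(intent.issuer, match.provider, intent.task))
    return agreements


demand = [Intent("frigate", "air_surveillance"), Intent("uav_2", "refuelling")]
supply = [Capability("awacs", "air_surveillance"), Capability("uav_2", "relay")]
print(coordinate(demand, supply))
# [Agreement(issuer='frigate', provider='awacs', task='air_surveillance')]
# The unmet refuelling intent remains in the demand pool for later rounds.
```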
The development of legal agreement protocols based on commercial contract law, with extensions to the framework to cope with the legality of duress in military use, is advocated.
Management levels
7: Management levels naturally arise from commonalities of location and intent.
The UC2 framework identifies at least four management levels, characterised by diminishing proximity and increasingly flexible options for social coordination.
The four levels of management identify natural and social constraints that will necessarily be imposed on what might otherwise be the laissez-faire management style alluded to earlier.
- Individuals are the smallest unit of management. Whether human or machine, the individual practises self-management by relying on cognitive capabilities.
- Platforms provide the second unit of management. Despite advances in virtual presence, some individuals will be collocated on platforms that must be socially coordinated.
- Teams constitute the third unit of management. Teams are formed on the basis of a commonality of intent, rather than a commonality of location and intent. This allows for a more flexible approach to social coordination.
- Societies form the fourth unit of management. Societies form on the basis of interaction, be it physical or virtual. Societies accommodate the mix of collaborative and competitive ingredients.
A UC2 system is perhaps best understood as a society of societies. The social agreement protocols and constraints have to contend with the dynamics both within and between social groups. Individuals will generally belong to multiple social groups concurrently, and societies are dynamic, with membership often changing according to the mission.
The design of UC2 systems may be based on the notion of preferential and critical requirements. Preferential requirements may be met by coding appropriate strategies into agent designs. Critical requirements, which specify behavioural boundary conditions of the UC2 system (typically by citing failsafe conditions), require verification; if verification is necessary before a specific UC2 mission configuration can be deployed, then a formal proof of the system design would be required. The UC2 approach to design is to define adaptable capability that can be adaptively combined, while ensuring that certain boundary conditions are met.
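A minimal Python sketch of the distinction between preferential and critical requirements (the requirements and state fields are invented for illustration): preferential behaviour is supplied by whatever strategy is coded into the agent, while critical requirements act as boundary conditions that gate every action and fall back to a failsafe when violated.

```python
from typing import Callable, Dict, List

# Critical requirements: behavioural boundary conditions that must always hold.
CRITICAL_REQUIREMENTS: List[Callable[[Dict], bool]] = [
    lambda state: (not state["weapon_armed"]) or state["roe_satisfied"],
    lambda state: state["fuel_margin"] >= 0.1,
]


def choose_action(state: Dict, preferential_strategy: Callable[[Dict], str]) -> str:
    """Preferential strategies are free to optimise; critical requirements gate the result."""
    if all(requirement(state) for requirement in CRITICAL_REQUIREMENTS):
        return preferential_strategy(state)
    return "failsafe_hold"   # boundary condition violated: fall back to the failsafe


def fast_route(state: Dict) -> str:
    return "proceed_direct"  # a stand-in for an agent's coded preferential strategy


print(choose_action({"weapon_armed": False, "roe_satisfied": False, "fuel_margin": 0.3},
                    fast_route))   # proceed_direct
print(choose_action({"weapon_armed": True, "roe_satisfied": False, "fuel_margin": 0.3},
                    fast_route))   # failsafe_hold
```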
References
- Lambert, D.A. and Scholz, J.B. (2005). "A Dialectic for Network Centric Warfare". Proceedings of the International Command and Control Research and Technology Symposium (ICCRTS), McLean, Virginia.
- Lambert, D.A. (1999). "Ubiquitous Command and Control". Proceedings of the 1999 Information, Decision and Control Conference, Adelaide, Australia, pp. 35–40. IEEE.