THE CONFLICT BETWEEN HETEROGENEOUS SIMULATIONS AND INTEROPERABILITY

Roger D. Smith
Mystech Associates
Manassas, Virginia

ABSTRACT

The Distributed Interactive Simulation (DIS) Protocols are the best attempt available today at providing simulation connectivity. They allow heterogeneous simulations to operate together based on a common understanding of a few message types and their contents. Integrated DIS simulations have typically been single-object, simple-engagement simulations: tank simulators engage in combat activities that involve seeing and shooting enemy ground objects, while helicopter simulators see and shoot other airborne and ground-based objects. As long as object interactions remain this simple, heterogeneous simulations will operate together harmoniously.

The Aggregate Level Simulation Protocol (ALSP) is an ARPA-initiated project designed to join constructive simulations in much the same way that DIS joins virtual simulations. Simulated entities engage in a wide variety of activities, including seeing, sensing, moving, shooting, jamming, communicating, and reorganizing themselves. Creating an interface protocol that accommodates all of these events in a consistent manner has proven very difficult, and has not been completely accomplished. The original design and modeling frameworks within each existing simulation often make it impossible to share an event between the simulations and still calculate a fair and consistent outcome.

This paper explores some of the difficulties involved in integrating a very diverse and complicated set of simulation events. Many of the problems encountered in the constructive world over the past 15 years foreshadow those that will be uncovered in the virtual world. The first step in achieving interoperability is to allow communications, a challenge the DIS protocols are addressing well. But once this is accomplished, the dissimilarities of the integrated simulations will become apparent, and obtrusive. The paper uses analogies and actual interoperability examples to illustrate these problems. It then proposes a common modeling framework and transformation algorithms that must be shared by the simulations that are to become interoperable.

ABOUT THE AUTHOR

Roger D. Smith is Principal Simulation Engineer with Mystech Associates. He is responsible for developing simulations and tools to support the training missions of US and Allied forces around the world. These have included air and ground combat models, intelligence collection and analysis algorithms, after action review systems, and simulation management tools.

INTRODUCTION

The ability to use simulations to realistically train military personnel is strongly dependent upon the ability of the simulation to interoperate with other simulations. Both the DIS and ALSP programs are attempting to join multiple simulations into a single integrated environment. This combines the strengths of many simulators, taking advantage of expertise in different areas. The first step in creating interoperability is communication, which is the primary mission of DIS. The next step is to consider the capabilities of the distributed systems we will have created once these communications protocols are operational. This paper will explore some of the possible discontinuities that will arise in spite of a perfect communications protocol.

The virtual community has typically created synthetic environments of very high fidelity, but which span a limited set of combat events. The basic model for these has been the SIMNET project in which multiple objects of the same or similar types exist on a small piece of terrain. These objects are allowed to move about and shoot each other with simple projectile weapons. Basic interactions of this type are not difficult to synchronize using communications protocols.

The constructive community, on the other hand, has sacrificed object and event detail for breadth of operations. Units in this world move, shoot, communicate, jam, detect, report, and perform a host of other activities. Synchronizing such diverse operations is more difficult than what is done in the virtual community. The difficulties encountered in creating constructive interoperability may be indicative of what will be encountered by the virtual community as larger sets of heterogeneous simulations are joined into a single combat environment.

Valid Model Designs

When we undertake to design and build a valid simulation model for a particular process we need to consider the mission and application it will be used for. Whether the model is a good design depends upon the standards by which it is measured. If it uses compatible protocols, it is valid in communicating what is being done. If it uses compatible algorithms it is valid in determining how operations are being done. If it uses compatible design requirements it is valid in illustrating why operations are being done.

For simple events, like firing a projectile, the valid range of variation of a process is relatively narrow. But, as more complex events are modeled, the valid range of variation becomes much wider (figure 1). This is due to limited understanding of complex processes, the limits of modeling fidelity, and the decreasing amount of determinism in a process. Under these circumstances we can envision systems which are valid when running alone but invalid when joined with other simulations. As the complexity of a distributed system increases, the possibility of this type of invalidity also increases.

Discontinuity in Spite of Validation

To motivate interest and demonstrate hidden pitfalls, we will take a moment to describe an actual situation. The Corps Battle Simulation (CBS) and the Tactical Simulation System (TACSIM) have two active sets of communication protocols joining their operations, neither of which can overcome the lack of a shared modeling framework. CBS represents units at abstract levels such as battalions and brigades. These units exist as a list of equipment, capabilities, and status variables. This aggregate is assigned a single location on the battlefield and moves about over aggregated terrain. It engages in combat with other large units, both experiencing and inflicting damage. This damage is represented as a reduction in combat strength, so that only 50% of a unit's equipment may be considered operational.

TACSIM is an intelligence collection and dissemination simulation. It creates intelligence reports which are delivered to analysts to determine the posture and intention of the enemy forces. These analysts rely heavily upon the deployment of individual pieces of equipment to determine the activity and identity of an enemy unit. To meet this need TACSIM accepts unit information from CBS and adds a "deployment pattern" which assigns a unique location to every piece of equipment based on the unit's type, size, activity, and operational characteristics. This deployment pattern is then reflected in the intelligence reports generated by the sensors collecting information on the battlefield.

Though the deployment pattern is doctrinally correct, it is not known to CBS. When the intelligence analysts and staffs use the reports for the identification, location, and intention of the enemy units, there is no noticeable discrepancy. The reported location is not necessarily the exact location of the CBS unit, but the general vicinity is correct. However, when the TACSIM reports are used to provide targetable locations to artillery units in CBS there is a disconnect. The individual object locations provided by TACSIM do not correspond exactly to CBS unit/object locations. Therefore, when the CBS artillery is fired at a reported location, the result may be a direct hit or a complete miss, depending upon the deployment of the TACSIM object that was targeted. This problem is caused by the use of different simulation models in the two systems. The communications protocols between the two are not flawed; rather, it is the modeling constructs and the missions of the simulations that create the problem.
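To make the disconnect concrete, the sketch below illustrates the mechanism under stated assumptions: the function names, deployment radius, and lethal radius are invented for illustration and do not reflect the internals of either system.

  import math
  import random

  def tacsim_deployment(unit_center, num_items, radius_m=500.0):
      # Assign each equipment item a doctrinal offset around the unit center.
      random.seed(1)
      positions = []
      for _ in range(num_items):
          angle = random.uniform(0.0, 2.0 * math.pi)
          dist = random.uniform(0.0, radius_m)
          positions.append((unit_center[0] + dist * math.cos(angle),
                            unit_center[1] + dist * math.sin(angle)))
      return positions

  def cbs_artillery_hit(target, unit_center, lethal_radius_m=100.0):
      # CBS can only score the round against its single aggregate location.
      return math.hypot(target[0] - unit_center[0],
                        target[1] - unit_center[1]) <= lethal_radius_m

  center = (10000.0, 20000.0)                  # the one location CBS knows
  for target in tacsim_deployment(center, num_items=5):
      print("hit" if cbs_artillery_hit(target, center) else "miss")

Most reported locations fall outside the lethal radius of the single point CBS actually maintains, even though both sides exchanged every message correctly.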

As virtual simulations expand their capabilities, discrepancies of these types will be experienced as well. The only way to overcome these is to establish a common modeling framework within which heterogeneous simulations can operate.

BATTLEFIELD METAPHYSICS

The military simulations we deal with consist basically of objects which participate in events and experience the effects of those events (figure 2). It is currently popular to divide events and effects into cognitive and physical categories, where movement across terrain is physical and the decision of the route to use is cognitive.

Objects should be described using a class hierarchy, a standard object-oriented design problem. What is needed is a class structure that illustrates the breakdown and construction of all significant battlefield entities (figure 3). DIS has focused on a small part of this object structure, and since the entire structure was not considered during the PDU design phase there may be some difficulty in expanding the DIS standards to serve objects that are less similar to armored vehicles (soldiers, civilians, buildings, electronics, etc.).
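A minimal sketch of such a class structure is given below; the class names and attributes are assumptions meant only to suggest the shape of the hierarchy in figure 3, from armored vehicles out to the less similar objects just mentioned.

  class BattlefieldEntity:
      # Root of the hierarchy: anything with an identity and a location.
      def __init__(self, entity_id, location):
          self.entity_id = entity_id
          self.location = location

  class Vehicle(BattlefieldEntity):
      def __init__(self, entity_id, location, fuel):
          super().__init__(entity_id, location)
          self.fuel = fuel

  class ArmoredVehicle(Vehicle): ...       # the part DIS has focused on
  class Soldier(BattlefieldEntity): ...    # less similar objects that an
  class Building(BattlefieldEntity): ...   # expanded standard must also
  class ElectronicEmitter(BattlefieldEntity): ...  # serve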

Events are the types of information typically found in exercise-generated PDU's: movement, weapon firing, weapon impact, and message transmission. The list will grow considerably as the variety of distributed simulations increases.

Effects are characterized by the models embedded in each simulation. The determination of these effects is invisible to the distributed members of the exercise. Only the calculated outcomes are shared with others via PDU's. Events and effects should be considered together, since an event divorced from effects is essentially null. The invisibility of the calculations of effects generated by events is a key part of the consistency problem that will be faced when joining simulations.
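The sketch below illustrates this invisibility, assuming invented message fields and damage models: two simulations receive the identical Detonation PDU, apply their private effect models, and report different outcomes to the network.

  def receive_detonation(pdu, vehicle, damage_model):
      # Only the PDU crosses the network; damage_model is private to this
      # simulation and is never visible to the other exercise members.
      vehicle["health"] -= damage_model(pdu["munition"], pdu["range_m"])
      return {"pdu": "EntityState", "health": vehicle["health"]}  # outcome only

  def model_a(munition, range_m):          # one simulation's hidden model
      return 40.0 if range_m < 50.0 else 5.0

  def model_b(munition, range_m):          # another simulation's hidden model
      return 10.0

  pdu = {"pdu": "Detonation", "munition": "HEAT", "range_m": 30.0}
  print(receive_detonation(pdu, {"health": 100.0}, model_a))  # health 60.0
  print(receive_detonation(pdu, {"health": 100.0}, model_b))  # health 90.0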

HETEROGENEOUS ENGAGEMENTS

Interactions between military units in combat are very complex, which is one reason simulation has been used in the analysis of warfare. As a result, the combat example to be provided in this paper will be scaled down to effectively illustrate the problems that can occur when joining heterogeneous simulations. We will first draw several analogies to more common, better understood systems in order to illustrate the situation.

The analogies allow us to transform the very complex events of combat into less complex events in the game of chess and the experience of driving on a highway. They also describe an environment in which we have more experience and can conceptualize accurate examples. Since few system/software designers have participated in combat, this technique is valuable for bringing problems into a more familiar domain. Some of the characteristics of combat which we are interested in, and which we hope to preserve in the analogies, are cooperation, tactical planning, cognition, and reactivity.

Chess Analogy

The first analogy is from the game of chess. Imagine the application of the DIS protocols to a distributed game where every piece is controlled by a separate simulation and computer. Information of the type found in the DIS PDU's is exchanged to enable the pieces to move legally and avoid being captured. This analogy will not be extended to the cognitive processes that must be used in planning strategies, but will be limited to basic first order effects of actions.

In the DIS paradigm each piece knows the location and identity of all of the pieces on the board. From this information, each is able to calculate an acceptable next move. The PDU's that would be generated in this game are listed below (a sketch of these messages as code follows the list):

  1. Entity State - Informing the network of the change in location from square A to square B, or off the board.
  2. Fire - Announcing the arrival of the piece at a square that is occupied by an opposing piece.
  3. Detonation - Announcing the capture of an opposing piece.
  4. Signal - Announcing "Check" or "Checkmate" as appropriate.
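The following sketch renders the four message types above as simple Python dictionaries; the field names are assumptions chosen for the analogy, not actual DIS PDU fields.

  def entity_state(piece, from_sq, to_sq):
      # A move from square A to square B, or off the board (to_sq=None).
      return {"pdu": "EntityState", "piece": piece, "from": from_sq, "to": to_sq}

  def fire(piece, square):
      # Arrival at a square occupied by an opposing piece.
      return {"pdu": "Fire", "piece": piece, "square": square}

  def detonation(captured, square):
      # The capture of an opposing piece.
      return {"pdu": "Detonation", "captured": captured, "square": square}

  def signal(text):
      # "Check" or "Checkmate" as appropriate.
      return {"pdu": "Signal", "text": text}

  # A Bishop capturing a Knight on g5 generates three PDUs in sequence:
  for msg in [entity_state("WB", "c1", "g5"), fire("WB", "g5"),
              detonation("BN", "g5")]:
      print(msg)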

The internal modeling of the operations of each chess piece is not known by others on the network. DIS allows these to interoperate based on their ability to generate the appropriate PDU's. The rules used to generate moves and avoid capture are not available to other players. Assume that the movement rules for one Bishop are coded as:

  1. Move diagonally in any direction,
  2. Move as many squares as the board allows and the player desires,
  3. Do not move through a square occupied by another piece,
  4. Capture opponents occupying the final location following a move, and
  5. Do not occupy a square that is subject to capture by an opposing piece.

The Bishop simulation includes the rules for the operations of the rest of the pieces on the board. In chess these rules are very clearly defined and categorized. But, assume that the programmers are not intimately familiar with these rules. One simulation builder limits the movement of a Knight such that the turn point in its "L" shaped move must be unoccupied. Another designer omits this limitation and allows the move. One designer allows the King to capture other pieces, another does not. One allows Pawns to capture other Pawns "En Passant", but another does not.

Even though all simulations generate PDU's appropriately, when they play together the outcome will not be fair or representative of a pair of humans playing the game. These rule variations would be quickly detected and negotiated by human players, but the distributed simulation will allow the game to continue under these circumstances.
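The Knight discrepancy can be coded directly. The sketch below is a hypothetical illustration (the square naming, rule encodings, and blocking convention are assumptions): two movement validators both produce legal-looking PDUs, yet they disagree about the same board position.

  DELTAS = [(1, 2), (2, 1), (2, -1), (1, -2),
            (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

  def on_board(col, row):
      return 0 <= col < 8 and 1 <= row <= 8

  def knight_moves_jumping(pos, occupied):
      # Standard rule: the Knight jumps over intervening pieces.
      col, row = ord(pos[0]) - 97, int(pos[1])
      return {chr(97 + col + dc) + str(row + dr)
              for dc, dr in DELTAS if on_board(col + dc, row + dr)}

  def knight_moves_blocked(pos, occupied):
      # Miscoded rule: the first square of the "L" must be unoccupied.
      col, row = ord(pos[0]) - 97, int(pos[1])
      moves = set()
      for dc, dr in DELTAS:
          if abs(dr) == 2:
              block = pos[0] + str(row + dr // 2)         # one step vertically
          else:
              block = chr(97 + col + dc // 2) + str(row)  # one step horizontally
          if on_board(col + dc, row + dr) and block not in occupied:
              moves.add(chr(97 + col + dc) + str(row + dr))
      return moves

  occupied = {"g2"}                             # a pawn beside the Knight
  print(knight_moves_jumping("g1", occupied))   # h3, f3, e2 (order may vary)
  print(knight_moves_blocked("g1", occupied))   # only e2: g2 blocks the rest

Each simulation is internally consistent and emits valid PDUs; only the hidden rules differ, and no message on the network reveals that.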

In the case of Chess we may protest that such a situation would not occur because of the well defined nature of all aspects of the game, and because of the ability of simulation managers to see the problem and stop the simulation. This is true of Chess but not true of combat simulation. Warfare is many orders of magnitude more complicated than Chess, and its rules are not understood, much less agreed to. Simulation designers use their best understanding to create models, but no two of them are alike. Just as a simple game of chess breaks down because of small differences in encoded rules, simulated combat will break down. Interoperability begins with communications protocols, but it does not end there. The official rules of the game are a modeling framework to which all designers must adhere in order to create a distributed simulation that can operate realistically.

Automobile/Highway Analogy

The second analogy is from the highway system on which we operate our automobiles. This will be explored more briefly than the Chess analogy, but significant characteristics will be illustrated. Each automobile on the highway is operated by an autonomous entity - the driver. These drivers share the highway and react to the operations of other automobiles in their vicinity. The communications protocols between the vehicles are:

  1. Entity State - Changes in speed or direction as detected visually by other drivers, turn signals, and brake lights. All of these provide information which other nodes can use to react to the environment.
  2. Collision - Accidents which are visible to surrounding motorists.
  3. Emissions - The use of headlights and horn, and the sound and exhaust emitted by the automobile.
  4. Signal - The transmittal and receipt of messages via car phones and citizen's band radios.

We could create a simulated driving network which adheres to all of these standards. But, each automobile is controlled by an independently developed model which uses different algorithms and rules for operating on the highway. Variations such as those below conspire to create an environment which is not realistic and, therefore, not constructive for training operations (a sketch of conflicting rule sets follows the list):

  1. Speed - Automobiles may follow United States or European speed limits, or they may drive according to the maximum capabilities of their vehicles.
  2. Side-of-Road - Automobiles may drive on the right or left side of the road.
  3. Yield - Automobiles may yield the right-of-way at intersections and railroad crossings according to many different rules.
  4. Right-On-Red - This may be allowed or disallowed, and may be expected or unexpected by different drivers.
  5. Driving Surface - Motorists may be limited to the paved roads, or may choose to operate on sidewalks, fields, and lawns.
  6. etc.
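The sketch below encodes a few of these variations as rule sets; all names and values are invented for illustration. Only one safety-critical check is shown, but any of the variations above could be tested the same way.

  RULESETS = {
      "us_interstate": {"side": "right", "speed_limit_kph": 105,
                        "right_on_red": True, "surfaces": {"road"}},
      "uk_motorway":   {"side": "left", "speed_limit_kph": 113,
                        "right_on_red": False, "surfaces": {"road"}},
      "max_capability": {"side": "right", "speed_limit_kph": None,
                         "right_on_red": True,
                         "surfaces": {"road", "sidewalk", "field", "lawn"}},
  }

  def safe_to_share_highway(a, b):
      # Drivers coexist only if the safety-critical rules agree; other
      # mismatches (speed, surface) degrade realism rather than collide.
      return RULESETS[a]["side"] == RULESETS[b]["side"]

  print(safe_to_share_highway("us_interstate", "uk_motorway"))     # False
  print(safe_to_share_highway("us_interstate", "max_capability"))  # True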

In the real world the method that is used to standardize these operations goes beyond dictating the design of automobiles and the use of signaling devices. A specific set of rules is set forth in the driver's manual (a modeling framework), and every motorist must demonstrate adequate command of these rules in order to use the highways. Though there are violations, these are minimized by the actions of the legal system in removing "dangerous" drivers (those that do not adhere to the modeling framework) from the highways. It is the maintenance of a modeling framework that allows heterogeneous distributed systems to operate.

Armored Unit Example

We will now define three armored battalions, each with slightly different characteristics and capabilities. These represent the modeling structures and assumptions in three different simulations. A conceptual picture of each of these units is presented in figure 4. The units will then engage in activities which illustrate the implications of joining heterogeneous simulations, regardless of the ability of each to generate and use the communications protocol.

Unit A is modeled using a single location and orientation to represent the entire contents of a complex unit. Its equipment is stored in a list and damage is represented as a strength multiplier for the total power of the unit. This is the classic aggregated, or constructive, method used in many staff training simulations today.

Unit E is given a central unit location, but the equipment items in its inventory are assigned unique locations as offsets around the central location. This unit consumes fuel as it moves in the simulated environment. This entity model is an enhancement of the aggregate model above.

Unit V is assigned a central location based on the location of an object designated as the unit command vehicle. All of the objects are assigned individual locations and may move independently of one another. This unit consumes fuel just as Unit E, but in addition it experiences a degradation of morale and fatigue as a result of combat. It also receives all of its commands and intelligence via communications structures included in the model. This is an enhanced version of today's virtual training simulators.

Obviously, the units were described in increasing levels of fidelity. This fidelity creates a corresponding increase in vulnerability and a decrease in brute threat power. The models with increased fidelity are also the models most greatly affected by the intelligent behavior of the humans or cognitive algorithms operating them.

Imagine a combat scenario in which sets of units from all three models join in the action. Unit A is the most basic, and as a result the most dangerous. When A and E get into a running battle A fights to the death, experiencing only combat damage. E, on the other hand, must also operate under the constraints of burning fuel. Should it get the upper hand and begin to chase a retreating A, E would eventually consume all of its fuel and stop, while A could run forever.

Damage calculated against Unit A is shared among all of the objects in the unit. It does not identify the unique pieces of equipment which are within engagement range of Unit E. This effectively makes it more powerful than E. Since Unit E stores the locations of every piece of equipment, it will determine which of its objects can engage the enemy. This means that E cannot use its full combat power against A, because some equipment is located away from Unit A and cannot bring its weapons to bear in the engagement. Damage inflicted by Unit A is assessed against fewer pieces of equipment in E, and that equipment will be more severely affected by combat.
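The fuel asymmetry in the A-E pursuit described above reduces to a toy loop; all rates below are invented to show the structural problem, not tuned to any model:

  def pursuit(steps=100, burn_per_step=0.015, attrition_per_step=0.005):
      # Unit E chases a retreating Unit A; only E pays a fuel cost.
      a_strength, e_fuel = 1.0, 1.0
      for step in range(steps):
          if e_fuel <= 0.0:
              return f"E out of fuel at step {step}; A escapes untouched further"
          e_fuel -= burn_per_step          # a constraint A does not model
          a_strength -= attrition_per_step # E's fire thins A's strength
      return f"A strength {a_strength:.2f}, E fuel {e_fuel:.2f}"

  print(pursuit())   # E stops at step 67, while A "could run forever"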

Combat between Unit A and Unit V will be identical to A-E from A's point of view. Unit V, however, will experience morale degradation and fatigue as a result of combat. These factors, in addition to its casualties, will make it less effective over time, while Unit A will fight as fiercely from day 1 to day N of a war. Unit V may also expend time and energy maneuvering its individual objects into advantageous positions, tactics which are very useful when facing another unit of type V. But, when facing A, all of this work has no effect on the engagement. At best it may be useful in a few defensive calculations, since Unit V's computer will be allowed to determine the damage A is inflicting on V.

Unit V may also engage in electronic combat, attacking the communications links and radar sensors of enemy units in the area. Against Unit A or E this strategy will be totally ineffective. But against other V Units it may be very effective. Therefore, if Unit V is faced with opposition from units in all three models, electronic combat will weaken points in the line occupied by units of type V, while the areas covered by A and E will remain strong.

The examples given above illustrate some of the problems that arise when integrating models of different levels of fidelity using different modeling techniques. These illustrate only the very basic characteristics of combat units. When more fidelity (depth) or more capability (breadth) is added, the effects of these types of discrepancies are magnified. Some of these additions may be the use of chemical warfare, nuclear warfare, dynamic combat groups, logistics, intelligence content and timing, communications saturation, road/route saturation, and extreme weather. Discrepancies in either the virtual or constructive modeling domains affect the training scenario and skew the value of the exercise for the participants. This is why a modeling framework for all interoperable simulations is necessary. Certain essential capabilities must be defined for all models to allow them to operate together fairly.

MODELING FRAMEWORK

A modeling framework must describe the composition of units and objects in the scenario, what they are capable of, and what they are affected by. The DIS community touched upon the tip of this iceberg when they decided that the shooter would calculate the impact point of a weapon, but the target would calculate the damage as a result of the impact. This technique is part of a larger modeling framework which needs to be created for a more diverse set of simulation interoperations. The DIS dead reckoning algorithm technique is also a step toward a library of transformation algorithms for model mediation. The framework will include transformation algorithms which perform aggregation/disaggregation, high- and low-fidelity movement, morale and fatigue effects, etc.
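As a reminder of how compact such shared algorithms can be, here is a first-order sketch of the dead reckoning idea (the DIS standard defines several variants, including higher-order ones; the threshold and names here are illustrative):

  def dead_reckon(last_position, velocity, dt):
      # Extrapolate an entity's position from its last reported state.
      return tuple(p + v * dt for p, v in zip(last_position, velocity))

  def needs_update(true_position, reckoned_position, threshold_m=1.0):
      # Issue a fresh Entity State PDU only when the true position drifts
      # beyond the threshold from what the other simulations are computing.
      error = sum((t - r) ** 2
                  for t, r in zip(true_position, reckoned_position)) ** 0.5
      return error > threshold_m

  reckoned = dead_reckon((100.0, 200.0, 0.0), (10.0, 0.0, 0.0), dt=2.0)
  print(reckoned)                                    # (120.0, 200.0, 0.0)
  print(needs_update((121.5, 200.0, 0.0), reckoned)) # True: 1.5 m > 1.0 m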

A modeling framework will address what events must be represented by a simulation, and what effects must be considered in these events. For example, movement may be affected by terrain, weather, engineering effects, navigation, equipment failure, and actual movement from point A to point B. The event will then cause environmental effects, activity, emissions, and other events. There are a wide variety of valid methods of implementing these, but the standard will be described in the framework.

A sample framework for combat engagement is included in figure 5. This illustrates some of the factors that may be taken into account when performing different types of combat, and the resulting trickle-down effects.

Each model has a domain space, S, roughly defined by the objects, O, events, E, and effects, e, which it can operate on. The architecture can then be expressed as:

  S = f(O, E, e)

But, since effects are determined by an interaction of objects and events, this may be written as:

  S = f(O, E, g(O, E))

where,

  e = g(O, E)

The valid domain space of a distributed simulation must be determined from the domain spaces of each model included in the confederation. Two simple ways to determine the distributed space are: 1) accept the union of the individual spaces,

  S_D = S_1 ∪ S_2 ∪ ... ∪ S_N

or 2) the intersection of the spaces,

  S_D = S_1 ∩ S_2 ∩ ... ∩ S_N

If the domain space is described as a matrix, the union method creates a distributed matrix which is very sparsely populated. It implies the existence of events and objects for which no effects are defined. There is also no relationship between the effects generated by similar objects and events across the different models, as described in the Heterogeneous Engagements section above. This is not at all what we had hoped for in creating a distributed simulation. In effect, g(O,E) is not continuous, but rather unpredictably discrete as O and E vary.

On the other hand, the intersection method creates a very tightly defined matrix. It also limits the simulations to activities which are totally defined for all members of the simulation confederation, and may result in the empty set as the heterogeneity of the simulations increases.
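Treating each domain space as a set of (object, event) pairs makes the two compositions easy to see; the pairs below are invented examples:

  # Each model's domain space S_i as a set of (object, event) pairs.
  S1 = {("tank", "move"), ("tank", "shoot"), ("battalion", "aggregate-fire")}
  S2 = {("tank", "move"), ("tank", "shoot"), ("radar", "jam")}
  S3 = {("battalion", "move"), ("radar", "jam")}

  union = S1 | S2 | S3         # sparse: contains pairs for which some
                               # confederation members define no effects
  intersection = S1 & S2 & S3  # tight, and here already the empty set

  print(sorted(union))
  print(intersection)          # set()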

Obviously, a more complex method of determining the domain space is desired: one in which effects are defined for all object-event pairs which will occur in the distributed simulation. Exploring this further is beyond the scope of this paper, but we contend that the most efficient method for creating this ideal distributed domain space is to design models according to a modeling framework. This will provide a degree of consistency from the design phase through distributed activation.

CONCLUSION

The preliminary definition of the High Level Architecture recognizes that interoperability must begin in the design phase of the simulation. This requires a knowledge of the simulation standards and of other developments within the community. HLA espouses interoperability to a defined and limited degree. In this paper we argue that complete interoperability requires the definition of a comprehensive framework which can indicate the kinds of relationships that must exist between all different simulations, a roadmap in which it is possible to get from any one point to any other through the framework.

The HLA group assumes that any defining framework must take the form of an object model. Under this assumption, they argue that it is an unrealizable ideal to create a universal object model which connects all interoperable simulations. We agree that this is true of the object modeling approach, but feel that a more general framework is possible, and necessary, to realize long-term goals in interoperability. The specificity of an object model would not be included in this framework since it is intended to serve as a higher level guideline. It would impinge more upon the general modeling concepts, and less upon the object definitions and message protocols which the preliminary definition of the HLA addresses.

BIBLIOGRAPHY

Aggregate Level Simulation Protocol Operational Specification. 1993. MITRE Informal Report.

Corps Battle Simulation: Analyst's Guide - Air/Ground/Logistics. 1993. Jet Propulsion Laboratory. Pasadena, California. (3 Volumes)

Corps Battle Simulation: CBS-AWSIM/CBS-CSSTSS/CBS-TACSIM Interface Control Document. 1993. Jet Propulsion Laboratory. Pasadena, California. (3 Volumes)

Defense Modeling and Simulation Office. 1993. DMSO Survey of Semi-Automated Forces.

Defense Modeling and Simulation Office. 1993. Panel Review of Aggregate Level Linkage Technologies. Aerospace Report Number ATR-92(2796)-1. The Aerospace Corporation. El Segundo, CA.

The DIS Vision: A Map to the Future of Distributed Simulation. 1993. Institute for Simulation and Training. Orlando, Florida.

Distributed Interactive Simulation Master Plan. 1994. Department of the Army.

Distributed Interactive Simulation: Operational Concept 2.3. 1993. UCF Institute for Simulation and Training. Orlando, Florida.

Fishwick, P. A. 1994. Simulation Model Design and Execution: Building Digital Worlds. Prentice Hall. New York, NY.

Knepell, P. L. & Arangno, D. C. 1993. Simulation Validation: A Confidence Assessment Methodology. IEEE Computer Society Press. Los Alamitos, CA.

Modeling and Simulation Master Plan. 1994. Department of Defense.

Pace, D. K. 1995. Issues in Validating Simulations with Heterogeneous Levels of Fidelity. Presentation at High Fidelity Modeling and Simulation in the DIS Environment Workshop. Laurel, Maryland.

Popken, D. A. 1994. Hierarchical Modeling and Process Aggregation in Object-Oriented Simulation. International Journal in Computer Simulation. 4(1).

Reynolds, P. F. Jr. 1994. DISorientation. Proceedings of the 1994 Electronic Conference on Constructive Training Simulation. Falls Church, Virginia.

Smith, R. D. 1995. A Fully Integrated Intelligence Simulation Architecture. 21st Century Intelligence Technology Symposium. Fort Huachuca, Arizona.

Smith, R. D., editor. 1994. Proceedings of the 1994 Electronic Conference on Constructive Training Simulation. Falls Church, Virginia.

Smith, R. D. 1994. Vertical Integration of Constructive and Virtual Level Simulations. Presentation to University of Virginia, Computer Science Department.

Smith, R. D. 1992. Analytical Computer Simulation of a Complete Battlefield Environment. Simulation. San Diego, CA.

Standards for Distributed Interactive Simulation - Application Protocols, Version 2.0 Fourth Draft. 1994. UCF Institute for Simulation and Training. Orlando, Florida.

Zeigler, B. P. 1990. Object-Oriented Simulation with Hierarchical, Modular Models: Intelligent Agents and Endomorphic Systems. Academic Press. Boston, MA.