The Application of Virtual Reality Technologies to Existing Battlefield Simulations

by Roger D. Smith

Military officers in combat situations find that their perceptions of enemy troop strengths, locations, and movements are of primary importance. Without this information they cannot implement the most effective strategies available. Unfortunately, even the information that is available cannot always be arranged and fused into useful intelligence. To overcome this difficulty, armies train in simulated combat.

It has been common in the last twenty years for training to occur within large computer-driven simulations. Entire laboratories have been established where computers reproduce every type of combat imaginable and respond to user input by calculating battle outcomes, sensor performance, and troop movements. Combat results are communicated to participants and controllers in the form of printouts, tabular CRT displays, and graphics terminals. Commanders are able to determine the effects of strategies and tactics that will be used in confrontations like Desert Storm. The history of these encounters is stored for later review, allowing players to see the battle from the "White" or "God's Eye" view. This enables them to understand why certain tactics succeeded or failed, and reveals where others could have been employed had commanders been aware of the actual situation.

During a simulation exercise and the debriefings that follow, so much complex data is presented that it is nearly impossible for the human mind and senses to translate it all into a meaningful picture of events. The advent of computer graphics for data display greatly enhanced this process and made it possible to create more complex simulations while still conveying the information to the players in an intelligible format. But we are again pushing the limits of absorption and comprehension. Department of Defense simulations such as the Corps Battle Simulation (CBS), Air Warfare Simulation (AWSIM), Ground Warfare Simulation (GRWSIM), and Tactical Simulation (TACSIM) produce so much data and present so many lessons that users cannot grasp all of it. Much of the information generated is lost to the limits of human perception.

Combat simulations are separate worlds from which we are attempting to gather information. As software and hardware improve, these worlds will become even more minutely defined and will therefore contain even more information to be processed. These worlds need specialized sensors within them to collect that information, just as our five senses collect information in the real world. Were we to lose one of our human senses, no one would argue that we were still receiving a complete picture of the real world. But in the alternate, simulated world we began with such limited senses that it is hard to imagine how better ones would open new windows into that world.

The technology that made graphics output practical for simulations was a revolution. Though some argued that the same picture could just as well be built by hand from tabular data, they have been silenced by the success of graphical interfaces. Today, virtual reality (VR) is making another revolutionary interface possible.

VR and battlefield simulation are a natural combination: one is an existing virtual world in need of senses to explore it, and the other is a set of senses in search of a world to explore. We can merge the two and discover what has been going on inside simulations that we have not previously had the capability to perceive. The military has been creating these alternate worlds for years but has not been able to take from them all that they offer.

A test-bed can be built which captures information using virtual reality senses and delivers it to a military exercise player in real time. These new eyes, ears, and hands can be plugged into a world where information is waiting to be seen and heard. For years we have used the growing computer power coming out of Silicon Valley to increase the fidelity and breadth of our simulations; now we can use it to extract new information from the alternate realities that already exist. It will involve the player's senses, mind, attention, cognition, and imagination in training exercises as nothing has before.

In the beginning, we could create two virtual reality interfaces with the simulation. The first is for the Blue Force commander who is making decisions on how to execute the war. With this tool he will be able to "fly" over the battlefield, viewing his force's perception of the enemy units and their formations. He will also be able to see what his own forces look like from the enemy's point of view. By standing in their boots he may better understand their plan for him in the immediate future. Being so closely in touch with graphical data will give commanders a better understanding of the data they have been viewing from the outside for years.

VR interfaces can clearly expose holes in the intelligence collection capabilities of both sides. Though it may appear that there are no enemy forces in a given area, it may become clear that this is actually a hole in our sensor collection plan. This can be determined in an after action review by recording each side's perception and comparing it with a "White" view at the end of the exercise, or by allowing the VR Deck to toggle between Blue, Red, and White views to bring out the differences more fully. This capability may be given to the umpires to make them more than just rule enforcers. VR Decks will create a role of "Capabilities Analyst", someone whose job is to evaluate the performance of existing assets and to compare overlapping and discontinuous functionality. Two sensors may be collecting the same data while other information is not being gathered at all. The Capabilities Analyst will be able to see this and record it for later recommendations.

Not only is visual data easier and faster for the human mind to process, it is geared toward high-level decision making. Commanders are looking for trends and familiar patterns which reveal the actions and intentions of the opponent. Visual data has a kind of fingerprint which our minds retain clearly, just as we remember pictures better than text. When these fingerprints appear again, it is easier to recall the previous situation and correlate its lessons with the current problem.

Virtual views can also be rewound and run forward at different speeds, like a movie. In fast forward, details may blur, but the overall plot of the story will become clear, allowing commanders to step back and see the forest rather than the trees. As the detail in simulations increases, the fascination it exerts will become a significant fixation to break.

The second virtual interface is for the intelligence analyst. This person is responsible for fusing data reported from many different sources into a single coherent picture. Radar and photographic data from aircraft, electronic emission collectors, and forward reconnaissance teams all provide information about the enemy forces. This floods back to intelligence units and command centers, where it is pieced together like a jigsaw puzzle. If it cannot be interpreted, its value is lost and it never becomes useful intelligence. If the data can be transformed into a consistent pictorial form, the analyst may be able to process a higher volume and extract more intelligence than he can from textual data. In the tactical intelligence business, timeliness is everything; knowing too late is almost as bad as not knowing at all.

Cataloged archives of fusion decisions can be used to verify the analyst's decisions and give him an opportunity to explain his reasoning to new, inexperienced personnel. Events that were previously difficult to recall and relate clearly can now be played back as easily as a movie. Just as football players learn from watching game films, analysts will learn from watching these computer exercise films. The expertise of particularly masterful analysts can be captured and sent to units all over the world for educational classes.

As these two functions show, the value of VR Decks in computer simulations extends directly into real-world combat. As more data travels via computers, it will be easier to capture it and feed it into a VR Deck at a command post in the Iraqi desert or aboard a naval ship. Even though we test and develop these tools in a simulation environment, we will be creating real-world Battle Management Systems in the process.

Some real-world systems under development right now, such as the All Source Analysis System (ASAS), are driven by simulations for test purposes. The output of the TACSIM model is designed to electronically stimulate ASAS as if it were in actual combat. If a VR Deck were similarly developed with TACSIM as the stimulator, it would be ready for direct application in the real world.

TACSIM currently operates in the simulation world as the intelligence collector for other, strictly combat simulations. As such, it replicates the performance and output of the systems which would be reporting to ASAS. It operates with CBS, GRWSIM, and AWSIM in exercises all over the world. These exercises provide frequent opportunities for testing VR Decks in all theaters of combat. The Deck could be developed in a laboratory and tested with simulations before being taken into the real world. Since confrontations like Desert Storm cannot be scheduled, this is an obvious alternative to exploit.

Though a VR Deck is a unique piece of equipment, the methods of interfacing with it will be similar to those used to interface a pair of simulations. In typical command training exercises, the players usually interact with more than one simulation, though this may not be obvious to them. Simulations of sensors and air operations are usually run as separate processes on separate computers from the simulation of ground combat. The interface that allows this to occur is very similar to the one needed for a VR Deck. The Deck will need to know the location, composition, and actions of friendly forces, as well as the same perceived information about enemy units.

To acquire this information, it must first be extracted from the simulation's database. The best approach is to gather all friendly and perceived-enemy information in a single initialization package before the simulation begins, creating identical starting pictures in the simulation and the VR Deck. The packet should also include environmental information such as weather and terrain. As the combat, air, or sensor simulations progress through the war, the scenario changes. These changes must be captured and transferred to the VR Deck, which needs to know about changes in the location, layout, and strength of the units.
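To make the packet contents concrete, here is a minimal sketch of the two packet types in Python. The field names and value types are illustrative assumptions; the actual CBS and TACSIM record formats are not reproduced here.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class UnitRecord:
    unit_id: str                     # e.g. "1st Armor Bde" (illustrative)
    side: str                        # "blue", or "red" for perceived enemy units
    location: Tuple[float, float, float]  # latitude, longitude, elevation
    composition: Dict[str, int]      # equipment type -> count, e.g. {"M-1": 42}
    strength: float                  # fraction of full strength, 0.0 - 1.0

@dataclass
class InitializationPacket:
    units: List[UnitRecord]          # all friendly and perceived-enemy units
    weather: Dict[str, float]        # e.g. {"visibility_km": 8.0}
    terrain_id: str                  # names the terrain database already on the Deck

@dataclass
class UpdatePacket:
    time: float                      # simulation clock time of the change
    changed_units: List[UnitRecord]  # only the units whose state has changed
```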

Just as the combat simulations are continuously transferring information, the VR Deck must be continuously receiving it. When the initialization packet is received, the VR Deck will set up its database by creating the objects, deployments, and environment that exist in the simulation. Once this is done it will wait for update packets from the simulation, using them to identify the units that have changed and to update their location, layout, or composition.
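Continuing the sketch above, the Deck's receive loop might look like the following. The `channel.receive()`, `load_terrain()`, and `redraw_scene()` calls are stand-ins for whatever transport and graphics services the Deck actually provides.

```python
def run_deck(channel):
    """Build the Deck's database from the initialization packet,
    then apply update packets as they arrive."""
    init = channel.receive()                 # blocks until the init packet arrives
    database = {u.unit_id: u for u in init.units}
    environment = init.weather
    load_terrain(init.terrain_id)            # terrain already resides on the Deck

    while True:
        update = channel.receive()           # wait for the next update packet
        for unit in update.changed_units:
            database[unit.unit_id] = unit    # create or revise the unit's state
        redraw_scene(database, environment)  # refresh the player's view
```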

Not all simulations have terrain data to supply to the VR Deck, and those that do would require a significant amount of time to transfer it through an interface; therefore, terrain data must reside in the VR machine and be updated as it is cratered by explosions.

Next, we will need a three-dimensional image library and an image generation process. These images will represent the objects reported by the combat simulation. A standard set of objects such as tanks, jeeps, trucks, artillery pieces, fighter aircraft, and bomber aircraft must be available, but we may also choose to represent forces with abstract icons for entire divisions, battalions, and companies. It is desirable that players be able to create new images by manipulating those already in the database; this makes it possible to fill in any discrepancies and create a greater level of graphic detail. The standard tank symbol may be an American M-1 which could then be customized to represent a Soviet T-80. Since increased detail results in increased processing time, objects should be built at different resolutions: high-resolution images for still pictures such as marketing photos; medium resolution for panning slowly across an area; and low resolution for moving swiftly across the battlefield or searching for areas of significant interest. Since we will also be interfacing with a sensor simulation, we must recognize that objects are detected at different levels of resolution; the lower-resolution images can be used to report entities that are not yet fully identified.
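The resolution-selection rule might be sketched as follows. The speed thresholds, the three-level model set, and the identification measure are all assumptions; the point is only that viewing speed and identification confidence together pick the image drawn.

```python
FAST_PAN = 50.0   # eye movement per frame, in meters (illustrative threshold)
SLOW_PAN = 5.0

def select_model(obj, camera_speed, identification):
    """Return the image resolution to render for one object.

    identification: the sensor simulation's confidence, 0.0 - 1.0,
    that the object has been fully identified (an assumed measure).
    """
    if identification < 0.5:
        return obj.models["low"]     # partially detected: show a vague shape
    if camera_speed > FAST_PAN:
        return obj.models["low"]     # moving swiftly across the battlefield
    if camera_speed > SLOW_PAN:
        return obj.models["medium"]  # panning slowly across an area
    return obj.models["high"]        # still view: full detail
```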

Once the combat simulation begins, special functions will be needed. Obvious commands include movement in three dimensions, zoom, scan, and declutter. The Deck also needs to be able to search for particular types of information, such as specific locations, equipment types, unit names, or distribution patterns. A function shifting all symbols to lower resolution levels, as described above, should also be available.
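A search command over the Deck's database, using the packet fields sketched earlier, might look like this; the `area.contains()` helper and the `fly_to()` command are assumed.

```python
def search(database, equipment=None, name=None, area=None):
    """Return the units matching every criterion that is given."""
    results = []
    for unit in database.values():
        if equipment is not None and equipment not in unit.composition:
            continue
        if name is not None and name not in unit.unit_id:
            continue
        if area is not None and not area.contains(unit.location):
            continue
        results.append(unit)
    return results

# Example: fly the viewpoint to every perceived unit holding T-80s.
# for unit in search(database, equipment="T-80"):
#     fly_to(unit.location)
```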

The Deck must back up its database periodically. This provides a secure point to fall back to should the current database become corrupted. If the update packets are also saved, it will be possible to restore a saved database and then play the packets forward to any point between then and the current time. This would allow resynchronization with the simulation should serious problems occur, and it would also enable the commander to back up in time and replay interesting data over and over. He could run through a segment of time in fast forward to see the major trends during that period. To take advantage of this, the VR Deck must be able to switch from monitoring the current situation to reviewing history, while continuing to track and record current developments. A commander could then enter the VR Deck once every two or three hours and review all of the information that has been processed; he need not remain in it constantly to profit from all that it observes.
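A sketch of this checkpoint-and-replay scheme, under the same packet assumptions: snapshot the database periodically, log every update packet, and rebuild the state at any past time by replaying the log forward from the nearest snapshot. The five-minute interval is illustrative.

```python
import copy

class History:
    """Periodic database snapshots plus a complete update-packet log."""

    def __init__(self, interval=300.0):        # snapshot every 5 sim-minutes (assumed)
        self.snapshots = []                    # list of (time, database copy)
        self.log = []                          # every update packet, in order
        self.interval = interval

    def record(self, time, database, packet):
        """Called once per update packet applied to the live database."""
        self.log.append(packet)
        if not self.snapshots or time - self.snapshots[-1][0] >= self.interval:
            self.snapshots.append((time, copy.deepcopy(database)))

    def state_at(self, t):
        """Rebuild the database as it stood at simulation time t."""
        base_time, database = max(
            (s for s in self.snapshots if s[0] <= t), key=lambda s: s[0])
        database = copy.deepcopy(database)     # never disturb the saved copy
        for packet in self.log:
            if base_time < packet.time <= t:
                for unit in packet.changed_units:
                    database[unit.unit_id] = unit
        return database
```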

The interface described here is similar to some that already exist. CBS and TACSIM are joined in a similar fashion: the data that is passed is of the same type, and certain information must be built into each model to allow it to work with the other. An interface with a VR Deck must be standardized, allowing connection to CBS, TACSIM, or AWSIM. This will enable the military training community to test and utilize it in many different environments.
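The standardization argument could be expressed as a thin adapter layer: each simulation translates its native records into the common packet format, and the Deck itself never changes. The adapter classes named here are hypothetical.

```python
class SimulationAdapter:
    """The common face every simulation presents to the VR Deck."""
    def initial_state(self) -> InitializationPacket: ...
    def next_update(self) -> UpdatePacket: ...

class CBSAdapter(SimulationAdapter):
    """Translates CBS ground-combat records into Deck packets."""

class TACSIMAdapter(SimulationAdapter):
    """Translates TACSIM sensor reports into Deck packets."""

# The Deck's receive loop, sketched earlier, then works unchanged
# whether it is plugged into CBS, TACSIM, or AWSIM.
```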

Undoubtedly, the most valuable uses of virtual reality technology in military applications are not yet predictable. As hard as we try, it is not possible to consider all of the avenues open to a new concept; only by developing it and putting it in the hands of users can it really be explored. The simulation world is the perfect place to develop and test such a tool, and with the computer-intensive systems now on the battlefield it is a very short step from the simulated world to the real world.

Author: Roger D. Smith is a Principal Simulation Engineer with Mystech Associates, developing and maintaining simulations for military exercises around the world.