Virtual Reality and Augmented Reality training solutions built specifically for military training are beginning to surface in the wild. The benefits of these systems are considerable: the ability to train combat and small-squad tactics, run peacekeeping and policing scenarios, and of course teach specific procedures and skills through virtual reality and augmented reality interfaces.
But the majority of these systems share common flaws:
- They are built only with the single user in mind: one headset, one user, one simulation.
- They are typically limited in mobility, due to both cable tethering and the fragility of equipment.
- Virtual reality headsets can only see the real world through a single video pass-through camera. Augmented reality headsets can see the world clearly, but their single-display graphics lack true depth perception.
- They are generally not usable in the designated training spaces that military personnel use, such as warehouses or tactical scenario buildings.
This severely limits the value of augmented reality or virtual reality as a training platform, because it is too disconnected from the realities of actual combat.
To offer value, virtual reality training needs to:
Acknowledge that combat personnel train and fight as a coordinated unit.
Combat professionals fight as part of a tightly coordinated and focused team. Their ultimate objective is to have such an intrinsic understanding of their teammates that they can unconsciously predict their reactions in a combat situation, so the team can respond as a single unit. This is why a team trains and drills relentlessly: to achieve a high-performance flow state.
So for virtual reality training to be genuinely useful, it needs to handle an entire team of users at the same time, and it can’t be physically tethered to a station. There are some military systems that do this (as well as some commercial VR games for consumers). However, the majority of them share a common limitation: participants are restricted to staying on the spot. This is usually because the VR headsets either have no pass-through cameras (preventing users from seeing each other or physical obstacles), or they have pass-through from a single camera, which ruins depth perception and makes shooting and navigation difficult.
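To make “an entire team in one simulation” concrete at the network level, here is a minimal sketch of per-participant pose sharing over a local training network. The broadcast address, message format and tick rate are illustrative assumptions, not a real system’s protocol; a production platform would use a proper game-networking stack with interpolation and packet-loss handling.

```python
import json
import socket
import time

# Illustrative assumptions: a flat LAN in the training space, a fixed
# broadcast port, and one JSON message per tick.
BROADCAST_ADDR = ("255.255.255.255", 9999)
TICK_HZ = 30

def broadcast_pose(participant_id, get_pose):
    """Send this participant's head pose to every peer on the LAN."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    while True:
        position, orientation = get_pose()  # supplied by the headset tracker
        msg = {
            "id": participant_id,
            "t": time.time(),
            "pos": position,    # [x, y, z] in the shared arena frame
            "rot": orientation, # quaternion [x, y, z, w]
        }
        sock.sendto(json.dumps(msg).encode(), BROADCAST_ADDR)
        time.sleep(1.0 / TICK_HZ)
```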
Augmented reality solutions don’t have an issue with pass-through, but because they use a single display, any virtual elements in the simulation lack precise depth perception, limiting their value in scenarios where realistic weapon accuracy is a factor. What we need is a hybrid solution that combines the best elements of both: world awareness with true 3D depth perception.
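The difference is easy to quantify. With a calibrated stereo camera pair, depth falls out of pixel disparity (Z = f·B/d); with a single camera there is no disparity to measure, and the world flattens. A minimal sketch, with the focal length and baseline as assumed calibration inputs:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point seen by a calibrated stereo camera pair.

    Z = f * B / d. With one camera, d is undefined and there is no
    depth signal, which is why single-camera pass-through flattens
    the world.
    """
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity or unmatched")
    return focal_px * baseline_m / disparity_px

# Example: f = 700 px, 6.4 cm baseline (roughly eye spacing),
# 20 px disparity -> 700 * 0.064 / 20 = 2.24 m to the target.
```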
Furthermore, it is necessary to track multiple users in a real physical space, and to capture a participant’s actions in enough detail to reproduce them within the simulation. We need to track every step they take, every shot fired, and every other move they make. Every compromise and disconnect is another break from reality, making the platform less useful.
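One way to make “every shot fired” concrete is a timestamped event stream per participant. The field names and event kinds below are illustrative assumptions, not a real capture schema:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TrackedEvent:
    """One timestamped action captured from a participant."""
    t: float                              # seconds since scenario start
    participant: str
    kind: str                             # e.g. "step", "shot", "hand_sign"
    position: Tuple[float, float, float]  # in the shared arena frame
    detail: dict = field(default_factory=dict)  # e.g. weapon id, hit result

@dataclass
class SessionLog:
    """Everything that happened in one training run, in time order."""
    scenario: str
    events: List[TrackedEvent] = field(default_factory=list)

    def record(self, event: TrackedEvent) -> None:
        self.events.append(event)
```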
Acknowledge that virtual reality combat scenarios must be physically rigorous, mobile and conducted in physical spaces.
Standing idle and firing a toy gun at imaginary virtual targets is simply not an adequate representation of the realities of modern combat. A trainee must be able to open doors, vault over obstacles, climb stairs and take cover. They must not only have a weapon that handles like the real thing, they must have physical feedback from their actions: guns must kick, and impacts on their person must register through “force feedback” technology. Without it, vital physical cues are missing from the simulation, creating yet another disconnect that limits its value as a training tool.
To support this kind of physical activity, not only must the equipment be portable (with a backpack PC handling the task of rendering the virtual world seen through the virtual reality headset), but it needs to be tougher than most off-the-shelf products are designed to be. It is quite possible to assemble a platform from commercial off-the-shelf components, but the whole setup must be ruggedized and properly rigged, with no loose cords or shifting parts that could impede a soldier’s movements.
The VR training solution must not only be mobile, it must be calibrated to work in a real physical training space, so that the simulation understands the layout of the floor plan and can track the actions and locations of participants in real time. This presents another challenge: to run a simulation with sufficient realism, we need to track the full range of motion of each participant, every step they take, every wave of the arm, all the way down to hand signs and individual fingers. Traditional magnetic motion capture suits can potentially do this, if they can be properly integrated.
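As a sketch of what that calibration might look like, the function below aligns the tracker’s coordinate frame to the floor plan using two reference markers surveyed in both frames. It assumes a flat training floor where the two frames share a scale and differ only by a yaw rotation and an offset:

```python
import math

def rigid_transform_2d(p_tracker, q_tracker, p_floor, q_floor):
    """Build a tracker-frame -> floor-plan transform from two markers
    whose coordinates are known in both frames."""
    # Heading of the marker pair in each frame gives the yaw offset.
    a_t = math.atan2(q_tracker[1] - p_tracker[1], q_tracker[0] - p_tracker[0])
    a_f = math.atan2(q_floor[1] - p_floor[1], q_floor[0] - p_floor[0])
    theta = a_f - a_t
    c, s = math.cos(theta), math.sin(theta)
    # Translation that carries the first marker onto its floor-plan position.
    tx = p_floor[0] - (c * p_tracker[0] - s * p_tracker[1])
    ty = p_floor[1] - (s * p_tracker[0] + c * p_tracker[1])

    def to_floor(pt):
        return (c * pt[0] - s * pt[1] + tx, s * pt[0] + c * pt[1] + ty)
    return to_floor
```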
This mapping of physical space ensures that the ‘virtual enemies’ within a scenario behave realistically: they need to take cover, climb stairs, avoid obstacles and even stumble over obstructions. Ideally, they should also respond to changes in the environment (kicked-over tables, opened doors) as well as follow complex ‘plays’ assigned to them by the instructor.
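Here is a minimal sketch of one such behaviour, cover selection against an occupancy-grid floor plan using a line-of-sight test. The grid representation and the pre-marked cover points are assumptions about how the mapped space might be stored:

```python
def line_of_sight(grid, a, b):
    """True if no wall cell (value 1) blocks the straight line from a
    to b on the occupancy grid. Bresenham's line algorithm."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        if grid[y0][x0] == 1:
            return False
        if (x0, y0) == (x1, y1):
            return True
        e2 = 2 * err
        if e2 >= dy:
            err += dy; x0 += sx
        if e2 >= dx:
            err += dx; y0 += sy

def pick_cover(grid, cover_points, enemy_pos, trainee_pos):
    """Choose the nearest cover point the trainee cannot see."""
    hidden = [c for c in cover_points
              if not line_of_sight(grid, trainee_pos, c)]
    if not hidden:
        return None
    return min(hidden, key=lambda c: (c[0] - enemy_pos[0]) ** 2
                                     + (c[1] - enemy_pos[1]) ** 2)
```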
Acknowledge that for virtual reality training to be more than just a novelty, it must capture training data at the highest level possible.
The most important user of this system is not the participants playing the simulation, but the instructor who controls it. We need to give instructors a control panel that puts all elements of the scenario under their control, allowing them to place enemy combatants and assign them specific instructions. A dashboard then feeds information back to them during the exercise.
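As an illustration, a scenario built in that control panel could reduce to a simple data definition like the following; the behaviour and trigger names are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class EnemyOrder:
    """One scripted instruction the instructor assigns from the panel."""
    spawn: Tuple[float, float]   # floor-plan coordinates
    behaviour: str               # e.g. "hold", "patrol", "ambush"
    trigger: str = "on_start"    # e.g. "on_start", "on_door_open"

@dataclass
class Scenario:
    name: str
    enemies: List[EnemyOrder] = field(default_factory=list)

# Example: a two-man ambush behind the first door.
room_clear = Scenario("room_clear_a", [
    EnemyOrder(spawn=(4.0, 2.5), behaviour="hold"),
    EnemyOrder(spawn=(6.5, 3.0), behaviour="ambush", trigger="on_door_open"),
])
```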
Capturing motion data and location data from each participant is a technically challenging problem, but it ensures that the simulation is as accurate as possible, capturing a second-by-second record of every action a soldier takes during an engagement. Every hesitation, every stumble, and every moment they are out of position or covering the wrong angle: all of it is captured by the simulation and recorded. Post-session, the instructor has access to all this information on a simple timeline tool, and they can break an engagement down play by play to demonstrate to the team what went right and what they could have done better. This allows participants to see their own actions in the context of their teammates, and even directly from their own perspective.
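Given an event log like the SessionLog sketched earlier, the timeline scrub itself can be as simple as a binary search over event timestamps; this assumes events are stored in time order:

```python
import bisect

def state_at(log, t):
    """Return every recorded event up to time t (seconds since scenario
    start), so the review tool can scrub to any moment in the run."""
    times = [e.t for e in log.events]
    idx = bisect.bisect_right(times, t)
    return log.events[:idx]
```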
After every round, user data and video are logged in the action library. This is an invaluable tool for instructors, providing a library of actions that can be shown to newer learners. You could, for example, run through a scenario with your most senior and capable soldiers, and then allow junior recruits to see those actions from the perspective of the participants, from a “bird’s eye” top-down view, and so on. This “instant replay” footage also offers opportunities for desensitization, treatment of PTSD, sharing best practice, or illustrating the common mistakes most new recruits make early in their training. You could also potentially invite allied tactical units in to demonstrate their own particular methodologies, or compare two units’ ‘runs’ against each other.
What gets measured, gets managed.
The most significant advance would be to measure not just participant actions but their reactions, through the integration of EMOTIV mobile EEG. The EMOTIV element of the custom VR helmet allows squad leaders to record the vitals and neurological responses of simulation participants, generating performance metrics from each training run and helping to work towards extreme performance states where every millisecond counts.
From an approximation of fear and stress responses, to the timing of threat recognition and response, the EMOTIV system gives us an insight into what a soldier is thinking, and what actions they can take to respond better to stress, react to the unexpected, and maintain a more effective ‘flow’ state in high-stress situations. The US military is already investing significant resources into identifying and cultivating peak performance neurological states, and the inclusion of this system would allow Defence to begin to explore similar possibilities within its own training optimization strategies.
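As a sketch of the kind of per-second signal an instructor dashboard could plot alongside the replay, the function below computes a crude arousal proxy from one channel of raw EEG: the ratio of beta-band to alpha-band power. This is a textbook simplification for illustration only, not EMOTIV’s actual performance-metrics API:

```python
import numpy as np

def stress_proxy(eeg_window: np.ndarray, fs: float) -> float:
    """Crude arousal proxy: beta-band (13-30 Hz) power divided by
    alpha-band (8-12 Hz) power for one channel of raw EEG samples.

    eeg_window: 1-D array covering e.g. one second of signal.
    fs: sampling rate in Hz.
    """
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg_window)) ** 2
    alpha = power[(freqs >= 8) & (freqs <= 12)].sum()
    beta = power[(freqs >= 13) & (freqs <= 30)].sum()
    return beta / alpha if alpha > 0 else float("inf")
```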
Conclusion:
Virtual reality offers significant benefits to Defence training, but a certain degree of scepticism is both appropriate and to be encouraged. We can and should expect to set the bar as high as possible for virtual reality training to ensure that the approach we take is grounded in realism, versatility and accurate performance capture.
By creating a mobile virtual reality platform that is untethered, spatially aware, and connected to a shared simulation, virtual reality can be added as an additional layer over traditional squad-based training scenarios. By capturing data on every action and reaction of participants, instructors can focus on tweaking fine details as well as core capabilities.
By assembling a custom solution from commercial technology that is available off-the-shelf today, we can give trainers and instructors the tools they need to help teams reach peak performance and improve the effectiveness of their teamwork in high-pressure situations.
About the Author:
Andrew Smith works at the Department of Defence. His background includes more than a decade working in the games industry (using the same tools and technology that power virtual reality), and a range of work as a writer, producer and project manager. Before coming to Defence, Andrew delivered a range of innovative projects for corporate clients.
One thought on “#DEFAUS17 IDEA PITCH – Creating a multi-user virtual reality training system”
Excellent blog, Andrew. I am interested in utilising similar technology for teaching REBOA and similar types of trauma management in combat casualties. I have also submitted a blog for DEF-AUS 17 (#DEFAUS17 IDEA PITCH – REBOA IN THE ADF: IMPLEMENTATION AND TRAINING) – am looking forward to it! Cheers, Abe