Winning: Not The Human Pilot


September 8, 2020: The U.S. Department of Defense recently conducted a competition between existing AI (artificial intelligence) software systems designed to replace a pilot in close-range air-to-air combat. The testing was done in a realistic flight simulator. The numerous AI competitors included Aurora Flight Sciences, EpiSys Science, Georgia Tech Research Institute, Heron Systems, Lockheed Martin, Perspecta Labs, PhysicsAI, and SoarTech. The AI systems competed against each other, and the winner was Heron Systems. This software then fought five engagements against a human pilot and won all five.

The pilots participating and observing noted that the AI, as advertised, modified its tactics during each engagement. The outcome was no surprise to pilots, most of whom grew up playing commercial air-combat simulators and noticed that the AI opponents in these commercial games kept getting better and better. It was also noted that some of the latest commercial air-combat software can handle BVR (Beyond Visual Range) combat, in which a long-range radar-guided missile is used. These missiles depend on decision-making software for their final approach to a target. A future competition with AI software able to handle BVR combat and strategy (sorting out the big picture), in addition to tactics (which the recent tests confirmed the AI was superior at), would be a better measure of whether software-controlled aerial combat was in danger of losing its human pilots. Current air-to-air combat doctrine considers BVR operations the most effective. Close-in “dog-fight” combat is to be avoided and is considered largely obsolete. Software developers see no problem with creating BVR combat software, if only because some of it has already been written and put to use.

The current plan is to have AI-piloted aircraft accompany one or more aircraft with a human pilot in overall command. It has already been pointed out that even with this arrangement, AI pilots would be superior in strategy as well as tactics. If one side turned its AI-piloted aircraft loose, they would probably beat a force in which one or more human pilots made the strategic decisions. The navy, and especially the air force, are reluctant to accept this replacement of human pilots, but as the recent close-range combat tests demonstrated, the AI is faster and more effective than any human pilot.

Then there is the public relations factor. For a long time, many civilians fretted over the use of fully “robotic” aircraft. That was not much of a problem when such beasts did not yet exist. But now robotic pilots have proven themselves and the future is here. If China rolls out such a system, which it is already working on, what would other air forces do? This does not diminish the public fear of robotic weapons being unleashed on the world. That fear is not so much irrational as based on an inaccurate understanding of weapons that have been around for a long time. Robotic killer weapons have existed for over a century and no one seems to have noticed.

Meanwhile the troops can’t wait to get truly robotic aircraft. Not just aircraft that can take off, fly around doing something useful, and then land by themselves, but also software that can interpret what the cameras are seeing. All this isn’t science fiction; it is already happening bit by bit. That’s how technology works, and the troops would like this particular process sped up. Currently there are some smaller UAVs (like the Raven) that can be launched by hand (by throwing them), automatically fly a programmed route at a set speed and altitude, and then return and land via a controlled crash. In this case the operator mainly acts as an observer, looking at the video for anything the unit commander (often standing nearby or looking at the video as well) can use. With these systems the operator can interrupt the automatic flight path and have the UAV circle a spot, or control the UAV himself. But these UAVs are not fully robotic and they aren’t armed.
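The flight-plan-with-override behavior described above can be sketched as a simple state machine: the UAV follows a list of programmed waypoints until the operator interrupts it to loiter over a spot, then resumes the route on command. This is an illustrative toy, not any real UAV’s software; the waypoint fields and mode names are invented for the example.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    AUTO = auto()    # follow the programmed route
    LOITER = auto()  # circle a point of interest
    MANUAL = auto()  # operator flies the UAV directly

@dataclass
class Waypoint:
    lat: float
    lon: float
    alt_m: float      # altitude in meters (invented units for the sketch)
    speed_mps: float  # speed in meters per second

class RoutePlanner:
    """Toy flight plan: a list of waypoints plus an operator override."""

    def __init__(self, route):
        self.route = route
        self.index = 0
        self.mode = Mode.AUTO

    def current_target(self):
        # In AUTO mode, steer toward the next unvisited waypoint.
        if self.mode is Mode.AUTO and self.index < len(self.route):
            return self.route[self.index]
        return None  # loitering, under manual control, or route complete

    def waypoint_reached(self):
        # Called by the autopilot when it arrives at the current waypoint.
        if self.mode is Mode.AUTO:
            self.index += 1

    def loiter_here(self):
        self.mode = Mode.LOITER  # operator interrupts to circle a spot

    def resume(self):
        self.mode = Mode.AUTO    # continue the programmed route
```

The key design point is that the operator’s interrupt never erases the programmed route; it just suspends it, so resuming automatic flight picks up exactly where the UAV left off.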

What the UAV operators want is more powerful and reliable automatic flight software, as well as pattern-analysis software good enough to understand what is being watched. Flight control software has been around for decades and is regularly used by manned aircraft. Some UAVs currently use this software simply to fly from one airfield to another, as well as for automatic takeoffs and landings. What the UAV users want is flight software merged with pattern-recognition software (applied to the video cameras) that alerts human controllers if something of interest is spotted below, then keeps looking at the find until ordered to move on by the human controller.
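That “alert and stare until told to move on” loop can be sketched in a few lines: run each camera frame through a recognizer, and when something is found, hold on it until the operator acknowledges. Both the recognizer and the frame format here are stand-ins invented for the example, not a real vision API.

```python
def recognize(frame):
    # Stand-in for a pattern-recognition model; returns a label or None.
    # Here a "frame" is just a dict that may carry a pre-computed label.
    return frame.get("label")

def surveillance_loop(frames, operator_acknowledged):
    """Yield (action, label) pairs describing the UAV's behavior:
    'fly' while nothing is seen, 'alert' when something is first spotted,
    'stare' while holding on the find, and 'fly' again once the
    operator_acknowledged() callback reports the order to move on."""
    staring_at = None
    for frame in frames:
        if staring_at is not None:
            if operator_acknowledged():
                staring_at = None            # human said "move on"
            else:
                yield ("stare", staring_at)  # keep the camera on the find
                continue
        label = recognize(frame)
        if label is not None:
            staring_at = label               # alert the controller
            yield ("alert", label)
        else:
            yield ("fly", None)              # nothing seen, continue route
```

The point of the sketch is the division of labor the article describes: the software does the watching and the flying, while the human only decides what a find is worth.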

There is also demand for UAVs that will fire at the enemy on their own. That is on the way. In 2011 an American firm developed software that enabled an armed UAV to seek out, identify, and attack (with a missile) targets without any human intervention. While this created some alarming headlines, the capability is nothing new; it first appeared during World War II. Work on these World War II robotic weapons has continued ever since, much to the joy of the occasional journalist looking for a scary story.

Meanwhile the military keeps encouraging research in this area. In 2009 the U.S. Air Force released a report (Unmanned Aircraft Systems Flight Plan 2009-2047) predicting the eventual availability of flight control software that would enable UAVs to seek out and attack targets without human intervention. This alarmed many people, especially those who didn't realize this kind of software has been in service for a long time.

It all began towards the end of World War II, when "smart torpedoes" first appeared. These weapons had sensors that homed in on the sound of surface ships. The torpedo followed the target until a magnetic fuze detected that it was underneath the ship and detonated the warhead. Acoustic homing torpedoes saw use before the war ended, and even deadlier wake-homing torpedoes were perfected and put into service (by Russia) in the 1960s. These detected the wake of a ship and followed it to where the ship currently was before detonating.

Another post-World War II development was the "smart mine." This was a naval mine that lay on the bottom in shallow coastal waters. The mine had sensors that detected noise, pressure, and metal. With these three sensors the mine could be programmed to detonate only when certain types of ships passed overhead. Thus, with both smart mines and smart torpedoes, once you deploy them the weapons are on their own, seeking out and destroying a target. For over a century, more primitive “contact mines” have exploded when a ship ran into them. These weapons were not alarming to the general public, but aircraft that do the same thing are.
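The three-sensor logic described above amounts to a simple AND of thresholds: the mine fires only when the acoustic, pressure, and magnetic signatures all match the programmed ship class. The sketch below illustrates that idea; the sensor fields, units, and threshold values are all invented for the example and do not describe any actual mine.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    acoustic_db: float    # noise level heard by the hydrophone
    pressure_drop: float  # pressure signature of a hull passing overhead
    magnetic_ut: float    # magnetic field distortion, in microtesla

@dataclass
class TargetProfile:
    """Programmed-in minimums describing the ship class worth attacking."""
    min_noise: float
    min_pressure: float
    min_magnetic: float

def should_detonate(contact: Contact, profile: TargetProfile) -> bool:
    # All three sensors must agree before the mine fires, which is what
    # lets it ignore small craft and single-signature minesweeping decoys.
    return (contact.acoustic_db >= profile.min_noise
            and contact.pressure_drop >= profile.min_pressure
            and contact.magnetic_ut >= profile.min_magnetic)
```

Requiring all three signatures at once is also why such mines are hard to sweep: a decoy that fakes only the noise or only the magnetic field fails the combined test.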

Smart airborne weapons have also been in use for decades. The most common is the cruise missile, which is given a target location and then flies off to find and destroy the target. Again, not too scary. But a UAV that uses the same technology as smart mines (sensors that find, and software that selects, a target to attack) is alarming. What scares people is that they don't trust software. Given the experience most of us have with software, that's a reasonable fear.

But the military operates in a unique environment. Death is an ever-present danger. Friendly fire occurs far more often than people realize, or even than the military will admit. Combat troops were reluctant to talk about friendly fire, mainly because of guilt and PTSD/combat stress, even among themselves, and the military had a hard time collecting data on the subject. After considerable research efforts, undertaken several times after World War II, it was concluded that up to 20 percent of American casualties were from friendly fire. This helps explain why military people and civilians have different attitudes towards robotic killing machines. If these smart UAVs bring victory more quickly, then fewer friendly troops will be killed by friendly or hostile fire. Civilians are more concerned about the unintentional death of civilians or friendly troops. Civilians don't appreciate, as much as the troops do, the need to use "maximum violence" (a military term) to win the battle as quickly as possible.

The U.S. Air Force has good reason to believe that it can develop reliable software for autonomous armed UAVs and then manned aircraft. The air force, and the aviation industry in general, have already developed highly complex and reliable software for operating aircraft. For example, automatic landing software has been in use for over a decade. Flight control software handles many more mundane functions, like dealing with common in-flight problems. This kind of software makes it possible for difficult-to-fly military aircraft (impossible to fly, in the case of the F-117) to be safely and accurately controlled by a pilot. Weapons guidance systems have long used target-recognition systems that work with a pattern-recognition library, enabling many different targets to be identified and certain ones to be attacked. To air force developers, autonomous armed UAVs that can be trusted to kill enemy troops, and not civilians or friendly ones, are not extraordinary but the next step in a long line of software developments.
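The pattern-library-plus-rules idea in the paragraph above can be sketched as a two-step decision: match the sensor return against a library of known signatures, then engage only if the match is both confident and on a pre-approved target list. Everything here is invented for illustration (the feature vectors, the dot-product similarity, the threshold); real target-recognition systems are far more elaborate.

```python
def best_match(signature, library):
    """Return (label, score) for the library entry closest to the sensor
    return. Here 'signature' is a feature vector and similarity is a
    plain dot product, a deliberately simple stand-in."""
    scored = [(label, sum(a * b for a, b in zip(signature, ref)))
              for label, ref in library.items()]
    return max(scored, key=lambda pair: pair[1])

def engage_decision(signature, library, approved, threshold=0.9):
    """Attack only a confident match that is on the approved list."""
    label, score = best_match(signature, library)
    # Fail safe: anything ambiguous or off-list is left alone, which is
    # the property that lets developers argue such systems can be trusted.
    if score >= threshold and label in approved:
        return ("attack", label)
    return ("hold", label)
```

The design choice worth noticing is that the default outcome is "hold": the burden of proof is on attacking, not on refraining, which is how such rule sets try to keep civilians and friendly forces off the target list.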

What civilians fear, and journalists exploit, are capabilities that weapons are a long, long way from having. These include persistence (the ability to keep at it for more than a few minutes or hours) and replication (robots building robots). Without those two capabilities, robotic weapons are no real threat to mankind. And that’s why there was no general panic when robotic torpedoes, smart mines, and guided missiles showed up over fifty years ago. But now we have selective memory colliding with headline hunting (or clickbait), and the road to the robopocalypse gets a little foggy.