by Austin Bay
May 4, 2021
In April, the U.S. Navy's Pacific Fleet ran a much-anticipated fleet battle problem. Arguably, its dull official name, Unmanned Integrated Battle Problem 21, was a touch misleading.
Navy fleet battle problems have a remarkable history of testing technology, training sailors, developing organizations and informing long-term decisions that have greatly benefited the United States. I'm referring to the battle problems of the 1920s and 1930s, which always had a trans-oceanic campaign against Japan as their strategic backdrop.
Prescient? Yes. And the battle problems were rigorous in execution and detailed in evaluation.
Under wartime-like operational conditions, the 2021 fleet problem tested gee-whiz military and communications technology, specifically unmanned warships and unmanned aircraft, some remotely controlled, some autonomously or semi-autonomously controlled. Autonomous and semi-autonomous systems can be classified as robots: they have a degree of "artificial intelligence" guiding their ability to maneuver, communicate and shoot.
Armed robot war machines can sound threatening, and they should, but as a topic they are not dull. They are sci-fact, not sci-fi, in the U.S. and in China, and Russia has robotic war machines as well.
As for my claim that the name is slightly misleading: I'm nitpicking the unmanned moniker to make a point. The exercise integrated the unmanned gee-whiz with definitely manned warships, manned aircraft, manned land systems and human beings at communications consoles attempting to monitor and coordinate the complex test.
War is a human endeavor, even when robots and AI are involved, and war is the realm of the unexpected. The unexpected requires human brains.
The battle problem ran from April 19 to April 26, a short time span, but I'm guessing the Pentagon will be crunching the data it gleaned for many months, perhaps a couple of years.
Understand, I don't mean just digital data and gee-whiz techno videos with missiles blasting target ships. Feedback from sailors, aircrews, cyber warriors and other human observers and non-robot participants is critical if the goal of this fleet battle problem is what it should be: creating and deploying the war-winning military capabilities and forces America needs and American taxpayers deserve.
During the exercise, an SM-6 missile fired by the destroyer USS John Finn struck a simulated enemy warship at long range. An MQ-9 Sea Guardian "unmanned aerial vehicle" provided accurate, real-time targeting data that all but ensured the bull's-eye.
Great video -- but what if the MQ-9, the destroyer and the missile had faced high-intensity cyber and electronic interference of the caliber many analysts think China is capable of generating?
That question relates to this question: Will the 21st-century Navy be as rigorous in evaluating Unmanned Integrated Battle Problem 21 as it was evaluating battle problems in the 1930s?
Dr. Albert A. Nofi's book "To Train the Fleet for War" provides an in-depth look at how the Navy conducted its fleet battle problems during another era of enormous technological change, the one between World War I and World War II. In those exercises, the Navy examined planning; battle tactics; air operations (carrier and shore); submarine and anti-sub ops; communications; cryptology (offensive and defensive, a kind of cyber equivalent); and perhaps the biggest of all, sustainment of a fleet operating at long distances from major naval bases.
In an email, Nofi told me the pre-World War II fleet problems' main purpose "was to practice how to 'make war at the drop of a hat,' as one admiral put it." In retrospect, input by informed observers was vital. Nofi recommended the Navy "find reporters who are reliable enough to keep their mouths shut if they encounter anything classified, but who know enough about the naval service to report on the problems."
That sounds like great advice.