Remote Combat and the Limits of Unmanned Airpower

Our defense establishment has become seduced by the idea of unmanned airpower. Futurists now predict an eventual combat force composed of unmanned systems performing existing wartime missions, directed by teams of remote operators or perhaps even guided autonomously by artificial intelligence. Impressed by the performance of these developing machines in the benign counterinsurgency airspace of Iraq and Afghanistan, many observers have become understandably enthusiastic about the future of remote combat. Unfortunately, that enthusiasm has produced a premature level of confidence in unmanned air systems throughout the defense community, and while the promise of remotely executed airpower is impressive, these views are now beginning to shape discussions about how we equip our force. Although these future concepts are probably inevitable and rightly capture imaginations, there are several reasons our nation needs to exercise great care as we step toward the future of unmanned combat.

Unmanned combat systems have fundamental limitations that can make reliance on them a war-losing proposition. These limitations involve network vulnerabilities, release-consent judgment, and, most importantly, creative capacity during air combat and close air support missions. Although futurists might assume these problems away with grand ideas about technologies yet to be developed, during the next few decades these limitations will remain critical constraints on our ability to provide airpower in the joint fight.




Network Vulnerabilities
Over a century of aviation, Airmen have learned the value of a key airpower tenet: Centralized Control; Decentralized Execution. Centralized Control is what allows well-informed experts in a distant headquarters to direct or redirect the firehose of airpower within minutes to engage emerging targets or to support troops under enemy fire. At the same time, Decentralized Execution allows the aviators directed to the scene, equipped with datalink, weapons, and eyeballs, to quickly assess the situation beneath them and execute violence on the enemy with swift, decisive precision. When faulty radios, broken data links, or intentional enemy jamming interfere with communications, the combat pilot defaults to established rules of engagement to cover the troops or fight the enemy he sees in the air, using predefined parameters of action.


Unmanned systems by their very nature violate this fundamental tenet of decentralized execution. Some might argue that it makes no difference whether the pilot is flying overhead the fight or flying from a Nevada control facility, but when the communication link goes down, it matters very much: planes go dumb, weapons become useless, and American troops are left uncovered. The effect of a broken network is the same as if the CFACC himself put down the joystick to take a lunch break. Even if we were to provide our remote operators with hyper-encrypted signals transmitted over unlimited bandwidth and an unrestricted ability to see, hear, feel, and smell the same sensory inputs they would experience in the cockpit, this vulnerability would still exist. No matter how good our systems are, our next enemy is going to find a way to break the link between operator and aircraft during wartime.
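One way to picture the problem is the lost-link logic a remotely piloted aircraft must fall back on when the operator disappears. The sketch below is purely illustrative; the states, behaviors, and names are assumptions rather than any fielded system's logic, but it shows why a broken link leaves only conservative, pre-briefed defaults no matter what the troops below need at that moment.

```python
from enum import Enum, auto


class LostLinkBehavior(Enum):
    """Hypothetical pre-briefed contingencies for a lost-link aircraft."""
    HOLD_AT_ORBIT = auto()    # loiter at a preplanned point, weapons safe
    RETURN_TO_BASE = auto()   # fly a preplanned route home, weapons safe


def handle_link(link_up: bool, troops_in_contact: bool) -> str:
    """Illustrative only: once the link drops, no tactical judgment remains.

    The aircraft cannot weigh the situation on the ground, so the supported
    troops lose their cover regardless of how badly they need it.
    """
    if link_up:
        return "operator in the loop: full mission available"
    # Link lost: weapons are inhibited and a conservative default is flown,
    # even if friendly troops are taking fire at that very moment.
    behavior = (LostLinkBehavior.HOLD_AT_ORBIT if troops_in_contact
                else LostLinkBehavior.RETURN_TO_BASE)
    return f"link lost: weapons inhibited, executing {behavior.name}"


if __name__ == "__main__":
    print(handle_link(link_up=False, troops_in_contact=True))
```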


Ultimately, the links required for remotely piloted airpower to operate in a hostile environment will shift airpower towards all the negative aspects of centralized execution, and with that shift will come the potential for a hamstrung, ineffective air component. If we depend too heavily on these vulnerable data links, we will diminish America's capacity to rain hate on the enemy.


Release Consent
Because we know links to pilotless aircraft are vulnerable to enemy attack during wartime, our joint community must find methods to overcome this fundamental connectivity weakness if we are to expect successful missions from these machines over the long run. Our most obvious answers to network vulnerability are preplanned and preloaded missions for unmanned strike aircraft, and artificial intelligence for air combat or close air support missions.
Preplanned strike routing should be a relatively easy goal to achieve. Since we have been capable of programming aircraft to fly preplanned routing for decades, the idea of unmanned airpower in the strike role is now almost a given. Indeed, it is a welcome development: send in the drones, have them avoid enemy defenses en route to the targets, bomb those targets automatically with JDAMs, and if we lose some aircraft, the loss is minimal. Engineers have made truly remarkable innovations to support this concept recently, but the concept of programming a machine to fly to a target and drop a bomb on a pre-approved spot is, on a relative scale, old news. Therefore, of the two solutions to the problem of broken data links, the single pivotal development that will facilitate unmanned air combat and unmanned close air support is artificial intelligence.
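For contrast, the "easy" half of that answer can be made concrete in a few lines. The sketch below is a simplified illustration of a preplanned, preloaded strike; the data structures and names are invented for clarity and do not reflect any real mission-planning system, but they make the point that nothing here requires a decision after takeoff, which is exactly why the concept tolerates a broken data link and exactly why it cannot react to anything new.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Waypoint:
    lat: float
    lon: float
    alt_ft: float


@dataclass
class PreplannedStrike:
    """A preplanned strike: route in, pre-approved aimpoint, route out.

    Every field is decided and loaded before takeoff; nothing is re-evaluated
    in flight.
    """
    ingress: List[Waypoint]
    aimpoint: Waypoint          # pre-approved coordinates for a GPS-guided weapon
    egress: List[Waypoint]


def fly(mission: PreplannedStrike) -> None:
    """Walk the preloaded plan step by step; no in-flight judgment involved."""
    for wp in mission.ingress:
        print(f"fly to {wp.lat:.4f}, {wp.lon:.4f} at {wp.alt_ft:.0f} ft")
    print(f"release on pre-approved aimpoint "
          f"{mission.aimpoint.lat:.4f}, {mission.aimpoint.lon:.4f}")
    for wp in mission.egress:
        print(f"fly to {wp.lat:.4f}, {wp.lon:.4f} at {wp.alt_ft:.0f} ft")
```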


Before considering the cockpit complexities of those two difficult air missions, let's first consider the relatively simple problem of weapons release consent. This is a binary decision, now performed in the cockpit: shoot or don't shoot, drop or don't drop. It is the primary life-or-death responsibility entrusted to the pilot, and it is the last step in a long chain of events resulting in destruction of the enemy. If the network vulnerabilities and broken communications links discussed above drive this decision to an on-board computer, we would need to be satisfied that such a device could assimilate all the data required to verify a hostile enemy and the absence of friendly troops or civilians nearby, and then make the binary decision to kill or not to kill.
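Reduced to logic, that on-board decision would look something like the sketch below. It is a deliberately oversimplified illustration; the inputs, threshold, and names are assumptions rather than any real weapons-release algorithm, and the fragility lies less in the flow chart itself than in the sensor picture feeding it.

```python
from dataclasses import dataclass


@dataclass
class SensorPicture:
    """Hypothetical fused inputs an on-board consent algorithm would need.

    Each field is only as trustworthy as the sensors and links behind it,
    all of which an enemy can jam, spoof, or degrade.
    """
    hostile_id_confidence: float    # 0.0 to 1.0, fused hostile identification
    friendlies_danger_close: bool   # friendly troops inside the weapon's effects
    civilians_near_target: bool


def release_consent(picture: SensorPicture, threshold: float = 0.95) -> bool:
    """The binary shoot / don't-shoot decision as a flow chart (illustrative only)."""
    if picture.friendlies_danger_close:
        return False
    if picture.civilians_near_target:
        return False
    return picture.hostile_id_confidence >= threshold
```

If the enemy spoofs the identification feed or masks the presence of civilians or friendlies, this logic returns the "correct" answer for a picture that is simply wrong, which is the heart of the problem described next.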


Unfortunately, the information required to make such a decision comes from so many sources, and could be so easily spoofed or jammed by the enemy, that the validity of that computerized decision could never be fully trusted. As much as we would like to build a flow chart that always spits out the right kill answer, the friction of war our enemy will inevitably inflict on these machines will cause them either to fail to kill a hostile target or to inadvertently kill Americans or civilians on close air support missions. Air-to-air shoot decisions are equally complex but require split-second timing, and very gifted Americans spend many years becoming proficient at this aspect of air combat. In some cases the decision might be performed best by a computer, when electronic identification can be used and the decision process is fairly simple; in many others it takes a great deal of talent and practice, because a computer cannot visually identify an enemy the way a pilot can when environmentals, energy state, relative position, spatial orientation, and other factors combine. We learned decades ago that despite expectations of technology-enabled standoff, air combat often migrates into the visual arena, which is why we wisely returned to the practice of building fighters with guns. In the visual arena, hostile declaration can be an extremely difficult task.


In a similar way, when troops on patrol today begin to take hostile fire, aircraft are immediately dispatched to provide close air support, and pilots use sensors to build situational awareness and take action against the enemy as rapidly as possible. Sometimes the solution is clear: the targets are simple and weapons employment is easy. Frequently, however, the problem is wicked. We have seen a low-tech enemy in Afghanistan become quite successful at inflicting this sort of friction, with strategic effect, during recent close air support missions, against intelligent and well-trained aircrew doing their level best to make the right decisions.


Almost every time commanders on the ground request effects while under fire, those effects are immediately provided. In a few less-known situations, however, combat indicators and release criteria have existed during close air support missions that might have permitted a computerized machine to release weapons on targets, yet aviators delayed the strikes based on last-second developments or a gut-level awareness that something just didn't seem right. This kind of judgment has saved lives many times, and regardless of whatever cognitive technology developments may emerge in the future, our innovators will have a very difficult time designing a computer with this kind of intuition. How can we possibly expect a machine to exercise better judgment on release consent decisions than a human when collateral damage is a factor? Collateral damage is always a factor, and the risks are too great to trust a machine against a thinking, adaptive enemy.


Cockpit Creativity
Beyond the simplicity of the binary kill decision lies the artificial intelligence problem of beating a creative opponent. Unlike a chess game with a finite number of possible moves on a two-dimensional board, or even a computerized video game with a given universe of environmental conditions, air combat and close air support missions occur within a context of unlimited variation. Weather, terrain, other aircraft, enemy-inflicted friction, and countless other variables make each combat occasion different. The way a pilot wins an air-to-air engagement is to see the opportunities created by these conditions and exploit a particular advantage, staying inside the enemy's decision loop until the kill. Air-to-air combat is not a task that can be easily relegated to a computer, because a thinking enemy will always find a way to use technology, confusion, or unpredictability to beat that computer. What separates men from machines is the ability to see opportunity and use it creatively.
Similarly, during critical close air support missions when troops are under fire, pilots often have to resort to creative, right-brained processes to sort out dynamic and confusing situations, seeking opportunities in a different kind of environment and exploiting maneuver, troop communications, and inputs from many sources to determine the best course of action. Although observers watching video links from drone aircraft in headquarters tend to feel a certain awareness of these engagements, what they actually witness is a small fraction of the picture. A pilot's ability to act on the variety of inputs available in the cockpit often makes close air support for troops under fire an intensely creative exercise, and it is one best executed by a person. Depth perception, color, shadow, experience detecting patterns of life, and a willingness to change perspective in the air in real time to generate opportunities all contribute to the creative process that is CAS.


It's one thing to receive coordinates, target a sensor, and shoot weapons at a given point in support. Predators do that now. Artillery can do that. It's not special. It is an entirely different endeavor to locate a smart, moving enemy hiding among rocks or in an urban setting, while coordinating other aircraft on multiple frequencies and recommending friendly ground movements, all the while optimizing orbit shape and climbing or descending as weather and terrain change, to find and root out evil. It is an art. It does not lend itself to scientific algorithms, and if it did, the enemy would find a way to beat them. We are a very long way from fielding the kind of artificial intelligence that can perform these missions as well as skilled, creatively perceptive aviators who are actually present, overhead, in the fight.


The Way Forward
Airmen jealously guard our nation's capacity to perform these missions because Air Superiority is so fundamental to joint campaign success, and because Close Air Support is so critical to saving American lives. As a contributor to the success of those missions, unmanned airpower will undoubtedly take on an increasing role in the future of American combat; indeed, it already is doing so. Therefore we should aggressively work toward the network security and artificial intelligence technologies that support the fundamental tenet of Centralized Control; Decentralized Execution. As those capabilities mature, we should immediately leverage the robotic innovations they enable and equip unmanned airpower with them. But we must also continue to sustain the types of systems we now have over the long term, and we must caution our joint community against setting a near-term force-procurement vector around a premature concept of unmanned airpower, which at least in the near to mid term has inherent limitations and may generate unacceptable risks.


America's airpower is the best in the world, not only because our nation has always embraced and employed the latest cutting-edge technology, but also because we have accumulated a century of corporate knowledge in the conduct of air warfare. Our Air Superiority and Close Air Support capabilities are particularly creative specialties, heavily dependent on a type of artistic capacity passed down over time from master to apprentice. Although the day will probably come when technology alone can accomplish these creative functions in combat, we would be foolish to jump on this bandwagon too soon.
Aircraft development takes decades, and if we rush to adopt a force structure that embraces this potential too optimistically, our nation may find itself with a gap in airpower's capability to preserve joint freedom of movement across the battlespace. Today we are able to use our 1980s-era Cold War F-15E and B-1 interdiction platforms in the challenging CAS role over Afghanistan because creative individuals in those cockpits can adapt those airframes to an environment and mission set we never dreamed of for those jets. Conversely, while unmanned strike aircraft might make sense now for the interdiction role, applying them to air combat or close air support missions in a future war could prove an entirely different proposition. When the enemy exploits weaknesses in our command and control system, these airframes will be missing the one element that makes them adaptive in air combat: the pilot.


For victory in the next war, our nation will depend on Air Superiority, and our engaged troops under fire will depend on Close Air Support. We must not cede airpower to overly enthusiastic ideas about unproven remote combat technology until that technology clearly demonstrates that it is independent of vulnerable data links and that it exceeds the capabilities of skilled aircrew in cockpits, applying creative genius as they fight a thinking, maneuvering, and determined enemy.
