Despite the rise of AI ‘super-brains’ that help tanks and robots target the enemy, humans will always triumph over machines in war

Israel is seeking to deploy the ‘Athena Program’, an artificial intelligence-based system designed to increase the battlefield lethality of its forces – but by subordinating humans to a machine, the IDF is inviting defeat.

Israel has reportedly lifted the shroud of secrecy surrounding its ongoing work on the militarization of artificial intelligence (AI), giving a UK-based newspaper, The Daily Telegraph, exclusive access to how the Israel Defense Forces (IDF) would employ an AI system known as the Athena Program on a future battlefield. The Athena Program is the brainchild of an Australian company, Cyborg Dynamics, which sought to develop a system that identifies and classifies objects and locations on a battlefield, then communicates its findings to a combatant, who decides which targets to engage and which to avoid for legal or humanitarian reasons.

The inventors of the Athena Program claim that their primary purpose was to improve the ethics of war and help protect civilian lives and property through proper classification. But the purpose behind Israel’s acquisition and adaptation of the system was more brutal – to rapidly locate and classify hostile targets on a battlefield and deliver that information in near-real time to a combatant commander, along with the best means of destroying those targets from a list of available weapons. In short, the Athena Program is seen by Israel as a sort of AI “super brain” that will provide it with unmatched lethality on any modern battlefield. As enticing as this kind of warfare may be to the uninitiated, the reality is that artificial intelligence will never beat human intelligence when it comes to the primal act of taking human life during war.

A battlefield sensor has multiple vulnerabilities, all of which any competent enemy would seek to exploit. First and foremost is its inherent lack of durability. The modern battlefield is a harsh environment, subjecting both man and equipment to extremes, natural and man-made, that conspire to break even the most hardened human or machine. With systems such as the Athena Program, the human-machine interface is essential, and both halves of that interface must function at peak levels to achieve the combat performance envisioned by AI’s supporters. Humans break, physically and mentally, under the duress of combat. Machines fail. Performance degrades the moment battle is joined. The notion that the competence and functionality an AI system needs to perform as intended can be sustained under the stress of modern combat is a fatal fallacy.

For a sensor to function, it must be able to send and receive data. Every time a sensor collects information, it exposes itself to detection and destruction. Likewise, every time a sensor receives information, it exposes itself to data corruption, whether in the form of a virus or of misinformation. Moreover, signals can be jammed, disrupted, intercepted, mimicked, and delayed. Any AI system is a prisoner of data, and data is therefore its greatest vulnerability. The art of deception has existed on battlefields since time immemorial; decoys, ruses, feints and concealment have simply adapted to the realities of the present.

If a combatant were to invest in AI to the point that critical combat decisions were tied to that capability, one can bet a dime to a dollar that any hostile force would make the destruction or disruption of the AI system a top priority. Electronic warfare would be used to jam the system’s sensors, and counter-fires would be allocated to destroy those sensors once located and isolated on or around the battlefield. Cyber warfare would be deployed to corrupt the systems and programs integral to the AI capability, and the human factor would be exploited, with an emphasis on recruiting agents who could help negate the system’s effectiveness. In short, in a war against a peer or near-peer opponent, AI systems would have a short life expectancy.

Last, but certainly not least: there is no substitute for the ingenuity of the human brain when it is paired with the survival instinct of a person whose life is on the line. The machine has no family or nation to defend. A machine is incapable of being fueled by the primal instincts of a human desperate to survive. A machine knows no fear, and yet it is fear that makes a human so lethal during war. A human will, literally, do anything to stay alive, or to help others stay alive. A human is unpredictable. Man will always best machine in a life-or-death situation, simply because the machine cannot distinguish between life and death.

For Israel, the attraction of AI systems such as the Athena Program is obvious – by maximizing battlefield lethality while simultaneously reducing the exposure of its soldiers to death and dismemberment, the Israel Defense Forces plays to a national aversion to casualties that has emerged as the Achilles’ heel of the Israeli way of war. The program is especially attractive along Israel’s tenuous border with Lebanon, where the IDF faces a very capable and lethal opponent in Hezbollah. But any such system is only as good as the humans who program and operate it. If Israel’s constant cycle of conflict with Hezbollah has produced any ironclad insight, it is that Israel is invariably taken by surprise by Hezbollah’s battlefield capabilities. Any future conflict will no doubt follow this path, leaving Israel a virtual prisoner of an AI system programmed to fight the last war rather than optimized for the next one.

Israel is not unique in this regard. The same inherent deficiencies, fallacies and prejudices that make AI impractical on any future Israeli battlefield hold true for all military forces worldwide. AI will always function as designed in peacetime, just like GPS navigation and military satellite communications. But when GPS and SATCOM are down, soldiers had better know how to navigate with a map and compass and communicate with field telephones and runners, or they will die. Likewise, any military that depends on AI to tell it where the enemy is and how best to engage will find itself on the pointy end of a hostile bayonet when the balloon goes up. AI will not direct the bayonet to be thrust into your guts; that happens when a human brain instructs muscle, blood and sinew to accomplish the task.

https://www.rt.com/op-ed/522193-ai-battlefield-humans-machines/
