The US-Israel war with Iran has shifted conflict from battlefields to data centres, software engineers and civilian technologies — turning them into both enablers of military power and its most vulnerable targets.
As the line between military and civilian infrastructures blurs, urgent questions emerge about the laws and ethics of war. Meanwhile, rapid technological development draws an ever larger number of civilians into the expanding footprint of the war machine.
Operation Epic Fury — February 2026’s joint US-Israel strikes on Iran — demonstrates how artificial intelligence has transformed modern warfare.
Relying on AI systems that can rapidly analyse vast quantities of diverse data, the US Central Command conducted an initial wave of 900 strikes within a remarkably narrow 12-hour window, a feat that would have been logistically impossible under traditional, human-centric planning.
Such spectacular speed in kinetic operations arises from integrating commercial AI systems, such as Claude AI, produced by technology giant Anthropic, into military intelligence platforms. These systems sift through satellite imagery, drone feeds, intercepted communications and even social media activity, turning raw data into recommended targets within minutes.
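At its core, such a platform is a data-fusion pipeline: weigh each incoming report by the reliability of its source, aggregate the scores per entity, and surface whatever crosses a threshold. The toy sketch below illustrates only that generic pattern; every source name, weight and threshold in it is an invented placeholder, and the real platforms are classified and vastly more complex.

```python
# Illustrative only: a toy multi-source fusion scorer of the kind the
# article describes. All source names, weights and thresholds are
# invented placeholders; real platforms are classified and far more complex.
from dataclasses import dataclass


@dataclass
class Report:
    source: str        # e.g. "satellite", "drone", "sigint", "social_media"
    entity_id: str     # the site or object the report refers to
    confidence: float  # model- or analyst-assigned confidence, 0..1


# Hypothetical reliability weights per source type.
SOURCE_WEIGHTS = {"satellite": 0.9, "drone": 0.8, "sigint": 0.7, "social_media": 0.4}


def fuse(reports: list[Report], threshold: float = 1.5) -> list[tuple[str, float]]:
    """Sum weighted confidences per entity; rank entities above the threshold."""
    scores: dict[str, float] = {}
    for r in reports:
        weight = SOURCE_WEIGHTS.get(r.source, 0.1)
        scores[r.entity_id] = scores.get(r.entity_id, 0.0) + weight * r.confidence
    ranked = [(e, s) for e, s in scores.items() if s >= threshold]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)
```

The point of the illustration is the speed: once the weights are set, ranking thousands of candidate entities takes a fraction of a second, which is what collapses the targeting cycle from weeks to minutes.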
This, however, is not the only example of the use of advanced AI in the theatre of conflict.
The Israel Defense Forces (IDF), specifically the elite intelligence Unit 8200, has operationalised two primary AI decision-support systems: ‘The Gospel’ (Habsora) and ‘Lavender’.
The Gospel automatically identifies structural targets — buildings, equipment and key infrastructure — in enemy territory at breakneck speed. While a team of human intelligence officers might traditionally generate 50 targets in a year, The Gospel, by automatically extracting intelligence from vast information troves, can generate 100 targets in less than two weeks, allowing sustained high-intensity bombing.
Lavender, meanwhile, complements The Gospel by identifying suspicious individuals based on their social connections, communication patterns, and movement profiles.
During the first six weeks of ‘Operation Iron Swords’ in the Gaza Strip in 2023-2024, Lavender reportedly identified as many as 37,000 Palestinian men as potential targets for assassination. The system was often used together with the crudely named ‘Where’s Daddy?’, an automated tracking program designed to alert military units when a targeted individual entered their family residence, facilitating strikes that, while precise in targeting the individual, also killed others around them.
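Stripped of their classified machinery, the mechanics these two programs combine are mundane: a weighted behavioural score plus a location trigger, essentially a spam filter bolted to a consumer geofencing app. The sketch below shows only that generic pattern; every signal, weight and radius in it is an invented placeholder.

```python
# Illustrative only: the generic mechanics behind behavioural scoring and
# location-triggered alerts. Every signal, weight and radius here is an
# invented placeholder.
import math

# Hypothetical behavioural signals and the weights a model might assign them.
WEIGHTS = {"flagged_contacts": 0.5, "phone_changes": 0.3, "movement_anomaly": 0.2}


def risk_score(features: dict[str, float]) -> float:
    """Weighted sum of signals, squashed to 0..1 with a logistic function."""
    z = sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))


def inside_geofence(pos: tuple[float, float], site: tuple[float, float],
                    radius_m: float = 100.0) -> bool:
    """Flat-earth approximation: is a tracked (lat, lon) within radius of a site?"""
    dy = (pos[0] - site[0]) * 111_320  # metres per degree of latitude
    dx = (pos[1] - site[1]) * 111_320 * math.cos(math.radians(site[0]))
    return math.hypot(dx, dy) <= radius_m
```

Nothing in this pattern understands context, which is precisely the problem the article returns to below: a frequently changed phone number raises the score whether its owner is a militant or a displaced journalist.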
The movement toward “speed of thought” warfare — where AI identifies and prioritises targets faster than human cognition can process — paradoxically makes the sophisticated digital assets that enable it the most vulnerable targets.
Consequently, a defining feature of the Iranian response to a war imposed by the joint US-Israel force has been the elevation of computing and data infrastructure to the status of a primary military target. In March 2026, Iran’s Islamic Revolutionary Guard Corps (IRGC) struck premier data centres in the United Arab Emirates (UAE) and Bahrain. The Guardian reported this as perhaps the first documented instance of a nation-state deliberately targeting commercial data centres to degrade an adversary’s military and intelligence capabilities.
Primarily supporting key civilian services, such as banking and healthcare, these data centres are business-critical, unprotected, and often as large as small cities due to the enormous computing power they concentrate.
And this is what makes them easy to detect and disrupt.
They also depend on fragile local power grids and consume huge amounts of electricity. Worse, if they are damaged, essential components such as transformers and cooling systems can take months to replace. One successful strike or outage could therefore cripple AI services across an entire region for the duration of a conflict.
It is for this reason that Iran published a list of “tech targets”, including facilities belonging to Microsoft, Google, IBM, Nvidia, and Oracle. The Islamic Republic argued that, although civilian-owned, these facilities are legitimate military targets due to their role as “functional extensions of adversarial power”.
The move represents a “Hormuz Ultimatum” in the digital age: choking the flow of compute to AI-enabled militaries, much as constricting the Strait of Hormuz chokes the flow of oil to adversarial economies.
To improve resilience, military planning frameworks, such as those emerging from the American global policy think tank RAND Corporation’s work on AI-enabled warfare, are shifting toward distributed systems in which control is spread across multiple locations.
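The resilience logic is the same as in any replicated civilian service: if control can migrate to whichever node is still reachable, no single data centre is a decisive target. A minimal sketch of that failover idea, with hypothetical site names and a simulated health probe standing in for real monitoring:

```python
# Illustrative only: the failover idea behind distributed control.
# Site names are hypothetical; the health probe is simulated.
import random

CONTROL_SITES = ["site-a", "site-b", "site-c"]  # replicated control nodes


def healthy(site: str) -> bool:
    """Stand-in for a real health check; here ~30% of probes fail at random."""
    return random.random() > 0.3


def elect_controller(sites: list[str]) -> str | None:
    """Hand control to the first reachable replica, so no single site is critical."""
    for site in sites:
        if healthy(site):
            return site
    return None  # all replicas down: the degraded state an attacker seeks
```

The design choice is the point: survivability is bought by spreading military control across many ordinary facilities, which is exactly what drags civilian infrastructure into the target set.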
Crucially, however, this decentralisation enmeshes military operations in everyday civilian infrastructure. The most glaring example of the “civilianisation of warfare” is the Pentagon’s demand that Anthropic — an American AI safety and research company — lift restrictions on its Claude AI model to allow for “any lawful use”, along with unrestricted access.
When the company refused, US Secretary of War Pete Hegseth labelled Anthropic a “supply chain risk”, making plain his view that any civilian technology is a strategic military asset and that non-cooperation amounts to betrayal.
And this is the boundary where civilian technology is coercively absorbed into the war effort.
The shift toward distributed AI and the reliance on commercial infrastructure has a chilling logical endpoint: the loss of protected status for the global technical workforce.
As boundaries between commercial and military networks blur, civilian IT professionals supporting the digital foundations of an information-age military are being drawn into the legal and kinetic crosshairs. After all, under International Humanitarian Law, civilians lose their protection from attack “for such time as they directly participate in hostilities”.
However, legal experts from West Point’s Lieber Institute highlight that applying this rule to software engineers and AI developers is deeply ambiguous. Does the act of writing code for an AI system meet the threshold of direct participation, regardless of how the system is used? Does the engineer remain liable, and hence targetable, for as long as the AI remains in military use? And when a distributed AI is spread across diffuse infrastructure, can those who maintain it be considered combatants?
These questions are no longer merely theoretical. Tehran has already designated the regional headquarters and engineering offices of many AI companies as high-value targets.
You too, Brutus!
The arrival of ChatGPT, and the host of competing AI models it has inspired, has revolutionised the ability to “spot and eliminate” IT professionals or military operatives hidden within a general population.
Modern AI technology can extract actionable information from vast amounts of biographical and social information that people share online, both knowingly and unknowingly.
These systems can then cross-correlate this information with raw intelligence gathered through other means, including healthcare records, satellite imagery and any other traces linked to an entity of interest.
An adversary can therefore design AI agents to build vulnerability profiles for every faculty member at a defence-linked university or every engineer at an IT provider, identifying potential risk scenarios for coercion or kinetic elimination.
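The technical core of such profiling is ordinary record linkage: scraps of data from different sources are chained together whenever they share an identifier. The toy sketch below shows that chaining on invented records; every field, value and source name in it is hypothetical.

```python
# Illustrative only: naive record linkage across toy data sources.
# All records, fields and values are invented.
OPEN_SOURCE = [{"handle": "@eng_42", "email": "a@example.com", "employer": "AcmeAI"}]
LEAKED_DATA = [{"email": "a@example.com", "phone": "+1-555-0100"}]
LOCATION_PINGS = [{"phone": "+1-555-0100", "cell_tower": "TWR-7"}]


def link(*sources: list[dict]) -> dict[str, dict]:
    """Merge records into profiles whenever they share any identifier value."""
    profiles: dict[str, dict] = {}
    owner_of: dict[str, str] = {}  # identifier value -> profile id
    for source in sources:
        for record in source:
            pid = None
            for value in record.values():
                if value in owner_of:  # seen this identifier before?
                    pid = owner_of[value]
                    break
            if pid is None:            # otherwise start a new profile
                pid = f"p{len(profiles)}"
                profiles[pid] = {}
            profiles[pid].update(record)
            for value in record.values():
                owner_of[value] = pid
    return profiles
```

Running link(OPEN_SOURCE, LEAKED_DATA, LOCATION_PINGS) collapses the three scraps into a single profile, because each pair shares one identifier. That same chaining is what lets an AI agent tie a public social media handle to a phone’s physical movements.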
AI doesn’t only enable warfare; it threatens to consume its own ecosystem. And this dangerous potential is amplified by the fact that AI can be a loose cannon.
Internal Israeli reviews for Lavender reported an accuracy rate of 90 per cent. While that sounds high, at the scale of the Gaza invasion a 10 per cent error rate meant that roughly 3,700 of the 37,000 people flagged were wrongly placed on a kill list, many of whom were journalists, human rights activists, or displaced persons. Their behavioural patterns (such as frequently changing phone numbers) were misread by AI as signs of militant activity. Human oversight was minimal, reported The Guardian, adding that human operators often spent as little as 20 seconds reviewing a target recommended by AI.
The rate at which AI works compresses the time available for human judgement, forcing commanders to rely primarily on computer-generated prioritisations instead of raw situational understanding. If granted further autonomy, AI’s erroneous automatic decisions risk driving escalation at a pace and scale that leave little room for diplomatic intervention.
Just as World War II gave us civilian technologies such as radar and the microwave oven, today’s conflict too may gift us AI-enabled tools of daily utility.
But unlike its predecessor, this modern incarnation of war enterprise carries a darker promise: it risks folding entire societies into a permanent state of threat management, where suspicion seeps deep into everyday life and the technological capacity to pursue perceived enemies at scale is already in place.
As the UN watches helplessly and war ethics are relegated to a choice of convenience, we must confront the ultimate question: are we on the cusp of unprecedented greatness, or on the brink of our own automated annihilation?