WASHINGTON: A Pentagon AI programme called Project Maven is at the centre of the US strikes against Iran and potentially one of the most consequential transformations of modern warfare.
What is it?
Project Maven is the Pentagon’s flagship artificial intelligence programme, launched in 2017 as a narrow experiment to help military analysts make sense of the torrent of drone footage pouring in from conflict zones.
Operators were drowning in imagery, searching frame by frame for objects of interest that might appear for only a moment before vanishing. Maven was built to find the needle in the haystack.
Eight years later, the programme has evolved into something far more expansive: an AI-assisted targeting and battlefield management system that has vastly accelerated what is known in war-making as the kill chain — the process from initial detection to destruction.
How does it work?
Maven functions like both the air traffic control of battle and its cockpit. Aalok Mehta, director of the CSIS Wadhwani AI Center, described the system as “essentially an overlay” that fuses sensor data, satellite imagery, and intelligence on enemy forces and troop deployments.
In practice, that means rapidly scanning satellite feeds to detect troop movements or identify targets, while also “taking a snapshot of the operational theater” to determine the best course of action for striking a specific target.
In a recent demonstration posted online, a Pentagon official described how Maven “magically” turns an observed threat into a targeting workflow, weighing available assets and presenting a commander with options.
The emergence of ChatGPT was another leap forward, broadening the use of the technology to a far greater range of users who can interact with Maven in natural language. For now, this capability is supplied by Anthropic’s Claude — though that arrangement is coming to a bitter end after the Pentagon bristled at the AI lab’s demand that its model not be used for fully automated strikes or the tracking of US citizens.
Why did Google say no?
The ethical question was a factor in Maven’s early years, when Google was the programme’s original AI contractor. In 2018, more than 3,000 employees signed an open letter protesting the company’s involvement, arguing that the contract crossed a line. Several engineers resigned.
Google declined to renew when the contract expired, and subsequently published AI principles explicitly ruling out participation in weapons systems. The episode exposed a fault line in Silicon Valley between engineers who viewed autonomous targeting as an ethical red line and defence officials who saw it as essential.
More recently, Google removed its AI policy restrictions and said it is leaning further into national security work. The Pentagon has said that Google, along with xAI and OpenAI, is in the mix to replace Claude in Maven.
In 2024, Palantir — founded in part with CIA seed funding and built from the start around government intelligence work — stepped into the space Google vacated.
The company has reportedly become Maven’s primary technology contractor, and its AI now forms the operational backbone of the programme. Palantir CEO Alex Karp has framed the stakes explicitly.
Published in Dawn, April 6th, 2026