On March 20, 2026, Deputy Secretary of War Steve Feinberg said in a letter to the Pentagon that the Maven artificial intelligence system will become an official program of record. The order is set to take effect by the end of fiscal year 2026 (September), solidifying the long-term use of Palantir Technologies’ weapons-targeting AI services across the institutions of the US Army. But how exactly does a company built by Silicon Valley “techbros” end up leading the charge in how the imperial army of our time chooses its targets? And how does that decision-making system lead to the direct targeting of a school for underage girls?

Palantir Technologies has spent years positioning itself not simply as a corporate contractor to the United States Department of War, but as a central architect of how its modern empire is conceived, executed, and expanded. Founded on the premise that data can be weaponized at scale, the company now sits at the intersection of Silicon Valley ambition and Pentagon imperial strategy, securing billions in public funds while embedding its software deep within US military operations. With a growing portfolio of defense contracts and direct lines into decision-making bodies, Palantir is no longer a peripheral player. It is an entity whose tools, executives, and institutional relationships are actively shaping the trajectory of US military operations abroad, including the most recent belligerent invasion and blatant military criminality against Iran.

That influence has translated into extraordinary financial and political gains. A multibillion-dollar contract pipeline, including agreements worth up to $10 billion with the U.S. Army and $1.3 billion tied to its Maven system, has helped drive the company’s market valuation to staggering heights, reportedly around $360 billion. At the same time, senior defense officials have moved to formalize the long-term adoption of Palantir’s technology across all branches of the military, signaling a level of dependence that goes beyond conventional procurement. The deeper this integration becomes, the more Palantir stands to benefit from prolonged conflict, as each expansion of hostilities increases the demand for the very systems it provides.

Nowhere is this convergence of profit, policy, and power more evident than in the early days of the U.S.-Israel war on Iran. Within the first 24 hours of the campaign, the Maven AI military targeting system was used to generate hundreds of strike coordinates, enabling a scale and speed of operations that would have been impossible under traditional methods. Among those strikes was the destruction of the Shajarah Tayyebeh elementary school in Minab, a site that had been active and publicly visible for years, yet was nonetheless fed into a targeting pipeline that culminated in the greatest tragedy this round of attacks on the Islamic Republic has produced. The attack killed 168 people, including more than 100 of the young girls who make up a large share of Iran’s highly educated populace, raising urgent questions not only about how such a failure occurred, but about the role of the company whose technology made such horrible criminality possible on an even greater scale than before.

As scrutiny intensifies, Palantir and its defenders have pointed to human oversight as the ultimate safeguard, insisting that its systems do not make lethal decisions. Yet internal performance data and battlefield reporting tell a more troubling story, one in which flawed datasets, compressed decision timelines, and institutional pressure to act quickly create conditions where errors are not just possible, but predictable. Even as accuracy rates for AI-assisted identification lag behind human analysts in many conditions, the U.S. military continues to expand its reliance on these tools, while simultaneously reducing the very oversight mechanisms designed to prevent civilian harm.
This investigation examines how Palantir has embedded itself within the U.S. military apparatus, the network of officials and incentives that sustain its rise, and the material benefits it derives from the US’s ongoing illegal assault on Iran. It also traces the chain of decisions and systems that led to the strike on Minab, not as an isolated incident, but as a case study in what happens when corporate technology, state violence, and automated warfare converge.
From “Seeing Stones” to State Surveillance
Palantir Technologies takes its name from the Palantíri of The Lord of the Rings, the “seeing stones” that allowed their users to observe distant events and gather knowledge across vast spaces. In Tolkien’s world, these objects were instruments of perception and strategy, used by powerful actors to extend their awareness far beyond the limits of ordinary sight. That reference point is not incidental. From its earliest days, Palantir has framed its mission in similar terms: to give institutions the ability to see more, know more, and act faster by transforming overwhelming volumes of data into coherent, actionable intelligence. Fittingly enough, though, in Tolkien’s world, anyone who used the Palantíri was destined for eventual doom.
Founded in 2003 by Peter Thiel, Alex Karp, Stephen Cohen, Joe Lonsdale, and Nathan Gettings, the company emerged in the immediate aftermath of 9/11, at a moment when U.S. intelligence agencies were under intense pressure to prevent future attacks. Its founding premise was that the necessary information often already existed, scattered across disconnected systems, and that the real failure was the inability to connect it. With early backing from Thiel and investment from the CIA’s venture capital arm, In-Q-Tel, Palantir was built from the outset to operate inside the national security ecosystem, not outside it.

That origin shaped both its technology and its clientele. Rather than developing consumer-facing products, Palantir focused on building platforms like Gotham, designed specifically for intelligence, military, and law enforcement use. These systems allowed agencies to integrate disparate data sources—ranging from surveillance records to financial transactions—and map relationships between people, places, and events in a single interface. The company’s later platform, Foundry, extended similar capabilities to large corporations, but its core identity remained rooted in creating the data foundation for the tools of imperial enforcement within US security and intelligence institutions. Early clients that used Palantir before it was even a publicly traded company included the CIA, NSA, FBI, and the Department of War, relationships that would expand significantly over the following decade.

Internally, Palantir cultivated a culture that mirrored both its origins and its ambitions. Its offices adopted names drawn from Tolkien’s universe, and its workforce embraced a language that blended tech culture with military terminology. Engineers were “forward deployed,” embedded directly within client organizations to adapt and “operationalize” Palantir’s systems in real time. This model allowed the company to move beyond the role of a traditional software vendor. It became an embedded operational presence inside the institutions it served, shaping not just the imperial tools being used, but how those tools were applied.
At the center of this structure is Peter Thiel, whose influence extends beyond financing into the ideological direction of the company. A billionaire investor known for his libertarian views and skepticism toward democratic governance, Thiel has consistently framed technological advancement as a means of preserving Western dominance in an increasingly unstable world. Under his guidance, Palantir has aligned itself explicitly with US strategic interests, a position reinforced by CEO Alex Karp’s repeated emphasis on supporting “the West” and its military institutions. This alignment is not rhetorical. It is reflected in the company’s contracts, partnerships, and the language it uses to describe its role in global affairs. Palantir seeks, quite literally and proudly, to paint itself as a weapon in the hands of “Western” empire.

Financially, that positioning has paid off. Palantir’s growth has been driven by a steady expansion of government contracts worldwide, including multibillion-dollar agreements with the US military. When the company went public in 2020, it did so with a business model already deeply intertwined with state power. Major institutional investors such as Vanguard, BlackRock, and State Street, asset managers so financially vast that some economic analysts have called them the “owners of the world,” joined Palantir’s founders in holding significant stakes, further embedding the company within the broader financial system that benefits from sustained government spending and the digitalization of assets.

Yet for all its visibility, Palantir remains difficult to neatly categorize. It does not collect data in the traditional sense, nor does it operate as a simple analytics firm. Instead, it builds the infrastructure through which organizations interpret their own information. Its software sits atop existing systems, allowing users to integrate, analyze, and act on data without restructuring the underlying architecture. Former employees often struggle to describe it in concrete terms, resorting instead to analogies that emphasize its breadth and flexibility.

What distinguishes Palantir is not just the scale of its technology, but the context in which it is deployed. Its platforms are designed to operate in environments defined by urgency, uncertainty, and high-stakes decision-making. Whether used to track financial activities, monitor populations, or support military operations, they function as tools that condense complexity into apparent clarity. In doing so, they do more than assist decision-makers. They shape the framework within which decisions are made, determining what is seen, what is prioritized, and ultimately, what actions are taken, including, but not limited to, the invasion and destruction of sovereign countries such as the Islamic Republic of Iran.
Inside the “Kill Chain”
This “apparent clarity” that the Maven system provides gives its users an inflated sense of confidence in military operations, masking the uncertainty embedded within its own processes. At its core, Maven functions in a manner not entirely dissimilar to advanced chatbot systems: a black box trained on vast datasets, producing outputs that appear coherent and authoritative, while concealing the exact pathways through which those conclusions are reached. For operators within the US military, this means that target identification, prioritization, and strike recommendations can be generated at speeds and volumes previously unimaginable, yet without full transparency into how or why specific targets are selected. This opacity has enabled a dramatic escalation in both the tempo and scale of operations, exemplified in the opening phase of the US campaign against Iran, where more than 1,000 targets were struck within 24 hours. Among those strikes were the events that led to the martyrdom of Ayatollah Sayyed Ali Khamenei and the martyrdom of 168 school girls and staff members at the Shajarah Tayyebeh school in Minab.
Comparative analysis of satellite imagery by Amnesty International has revealed that the school had been clearly established for years, physically separated from a nearby Islamic Revolutionary Guard Corps installation by a wall constructed well over a decade ago. The site maintained an active presence and exhibited none of the characteristics typically associated with a concealed or dual-use military target. Yet within the targeting architecture that governed the strike, this distinction appears to have collapsed. The area was treated as part of a broader military compound, suggesting that either the underlying intelligence was outdated, or that the system processing it failed to properly distinguish between adjacent civilian and military spaces.
To understand how such a failure could occur, it is necessary to examine how Maven operates in practice. The system ingests massive quantities of multi-format intelligence, including satellite imagery, drone feeds, radar data, and signal intelligence such as phone metadata and digital communications. Within this stream of information, it identifies what military analysts refer to as “patterns of life,” mapping relationships, behaviors, and movements that might indicate the presence of a legitimate target. These patterns are then used to generate target nominations, feeding into what has become an ever-expanding “target bank” for review and potential action.
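The pipeline described above can be sketched in a few lines of toy Python. Everything here is hypothetical and illustrative only: the class names, the scoring, and the numeric weights are invented for this article and bear no relation to Maven’s actual implementation. What the sketch is meant to show is the structural failure mode reported in the Minab case: once a stale “military” classification enters the system, it can inflate how all subsequent observations are read, so that fresh civilian evidence struggles to overturn the old label.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    site_id: str
    source: str             # e.g. "satellite", "drone", "sigint"
    military_signal: float  # 0.0-1.0: apparent strength of military activity
    age_days: int           # how old the observation is

@dataclass
class Site:
    site_id: str
    prior_label: str        # classification inherited from older intelligence
    observations: list = field(default_factory=list)

def nominate(site: Site, threshold: float = 0.5) -> bool:
    """Toy scoring rule: a stale 'military' prior adds a fixed bias to
    every new reading, so the old label tends to confirm itself even
    when the fresh data points to civilian activity."""
    if not site.observations:
        return False
    bias = 0.4 if site.prior_label == "military" else 0.0
    scores = [min(1.0, o.military_signal + bias) for o in site.observations]
    return sum(scores) / len(scores) >= threshold
```

Run against identical, clearly civilian observations (a low `military_signal` of 0.2), a site carrying an outdated “military” prior is nominated to the target bank while the same site labeled “civilian” is not. The point of the design choice is the one Jones makes below: the classification, not the evidence, does the deciding.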
In an interview with Democracy Now, Craig Jones, a senior lecturer in political geography and expert on modern warfare, explained that artificial intelligence systems like Maven are fundamentally designed to accelerate the “kill chain”—the process of identifying, approving, and striking targets. He noted that what once required “tens of thousands of hours” of human labor can now be reduced to “seconds and minutes,” compressing complex analytical workflows into near-instantaneous outputs. This acceleration is not merely a technical improvement; it restructures how decisions are made, shifting the balance from deliberate human assessment toward rapid, system-driven recommendations.
That speed is reinforced by Maven’s architecture. Built on Palantir’s data integration platform and incorporating Anthropic’s Claude AI model, the system operates as a layered environment in which data is aggregated, analyzed, and translated into actionable outputs. As described in the same Democracy Now interview, these systems do not simply analyze intelligence but actively nominate targets, generating large volumes of potential strikes that are then passed along for human approval. The result is a continuous flow of recommendations that can number in the hundreds or thousands within a single operational window.
In the case of the Minab strike, the same interview with Craig Jones highlights how such systems can fail in ways that are both technical and procedural. According to his analysis, the area in which the Shajarah Tayyebeh school was located appears to have been classified as part of a broader military compound based on outdated intelligence. Once that classification entered the system, it likely shaped how subsequent data was interpreted, reinforcing the designation rather than challenging it. Jones emphasized that even basic observational methods—such as monitoring the site in real time—could have revealed clear civilian activity, including the daily arrival of students. The failure, therefore, was not only in the data itself but in how that data was processed and validated.
This raises a critical question about the role of human oversight within such systems. Officially, military doctrine maintains that human operators remain responsible for final strike decisions. However, as Jones pointed out in his Democracy Now interview, the persuasive nature of AI-generated recommendations, combined with the speed at which they are produced, creates conditions where human review becomes increasingly compressed and, in some cases, perfunctory. When faced with a high volume of system-generated targets, operators may rely more heavily on the perceived authority of the system rather than conducting independent verification.
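The compression of human review can be made concrete with back-of-the-envelope arithmetic. The only input drawn from the reporting above is the figure of roughly 1,000 targets struck within a 24-hour window; the calculation itself is illustrative, and in practice the time actually available for review per target would be far smaller still, since the window also covers planning and execution.

```python
# Rough upper bound on review time per target, assuming (per the article)
# ~1,000 targets struck within a single 24-hour operational window.
targets = 1000
window_hours = 24
seconds_per_target = window_hours * 3600 / targets
print(f"At most {seconds_per_target:.1f} seconds of the window per target")
```

Even this generous upper bound, under a minute and a half per target, leaves no realistic room for the independent verification, such as real-time monitoring of a site, that Jones notes could have revealed the daily arrival of schoolchildren.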
Further compounding this issue is the broader institutional context in which these systems are deployed. Jones noted that recent policy shifts have reduced the presence and influence of military legal advisors and civilian casualty assessment teams, weakening the safeguards that traditionally served to challenge or halt questionable strikes. In such an environment, the outputs of systems like Maven are less likely to encounter resistance, allowing flawed or incomplete intelligence to pass through the decision-making pipeline with minimal friction.
The result is a system that does not simply assist human judgment but actively reshapes it. By accelerating the kill chain and presenting its outputs with an aura of precision, Maven creates a feedback loop in which speed is equated with effectiveness, and volume with success. In that loop, the margin for error narrows, while the consequences of those errors expand. The strike on the Shajarah Tayyebeh school, as examined through the lens of both technical analysis and firsthand expert commentary, emerges not as an isolated incident, but as a predictable outcome of a system designed to prioritize speed, scale, and operational dominance over deliberation and verification.
Tolkien’s Promised Doom
What emerges from this investigation is not a story of a single mistake or an isolated failure, but of a system—and a company—whose influence now extends far beyond the realm of software. Maven is not merely a tool that assisted in the targeting of the Shajarah Tayyebeh school; it is part of an infrastructure that made such an outcome not only possible, but increasingly likely. Its ability to process vast quantities of data, generate targets at scale, and compress decision-making timelines has reshaped how the United States conducts war. In doing so, it has also reshaped the conditions under which errors occur, magnifying their frequency and their consequences.
That shift places Palantir at the center of responsibility. As the architect of the system and a primary beneficiary of its expansion, the company cannot be separated from the outcomes it helps produce. The same capabilities that enabled the rapid generation of over a thousand targets in the opening hours of the war on Iran also formed the basis upon which that war was conceived and justified. When decision-makers are presented with a continuous stream of machine-generated “insights,” the line between analysis and advocacy begins to erode. In that environment, the argument for escalation is no longer made solely by officials or strategists, but reinforced—quietly and persistently—by the outputs of the systems they rely on.
This raises a more unsettling possibility: that the reliance on Maven was not only instrumental in the conduct of the war, but in the decision to wage it in the first place. A system designed to identify threats at scale inevitably produces them, filling target banks and operational plans with an abundance of actionable options. In such a framework, restraint becomes harder to justify, while action becomes easier to execute. The war, initiated without congressional approval and framed as a demonstration of overwhelming technological superiority, reflects this dynamic. It is a conflict shaped not only by political will, but by the capabilities of the systems that inform it.
Yet those capabilities have not delivered the outcomes they promised. The stated objectives of the campaign remain unmet, even as its human cost continues to grow. The overreliance on Maven—on its speed, its scale, and its perceived precision—has exposed the limits of algorithmic warfare. What was presented as a tool for clarity has instead introduced new layers of ambiguity, where flawed data, compressed timelines, and reduced oversight converge into decisions with irreversible consequences. The strike on Minab stands as the most visible manifestation of that failure, but it is unlikely to be the last.
In the end, the parallel that Palantir’s founders once drew from Tolkien’s palantíri becomes difficult to ignore. Those who used the stones believed they were gaining a clearer view of the world, when in reality they were seeing only what the system allowed them to see, shaped by forces they did not fully understand. The result was not control, but a narrowing of perspective that led, inevitably, to ruin. Today, as artificial intelligence systems like Maven become embedded at the highest levels of military decision-making, a similar pattern begins to take shape. The promise of total awareness gives way to a dangerous illusion of certainty—one that obscures responsibility, accelerates violence, and leaves its consequences to be measured not in data points, but in the faces of the martyrs lost in places like Minab.

Source: Al-Manar English Website
