AI warfare is here: How intelligent drones Harop and Heron fronted India’s Operation Sindoor
Ukraine has managed to stay in the game against powerful conventional Russian forces through jerry-rigged autonomous and AI-guided drones, with small, first-person-view (FPV) attack drones, guided by algorithms, destroying more Russian armour than any other weapon category.
Meanwhile, in Gaza, the Israelis have used advanced algorithms, code-named The Gospel and Lavender, to sift intelligence and suggest targets in real time. In 2020, a Turkish-made Kargu-2 attack drone may have autonomously hunted down fighters in Libya without human orders, possibly the first lethal strike by a truly autonomous weapon.
In our imagination, AI warfare is about armies of futuristic Terminator robots marching in tandem as they go to war; in reality, the age of AI warfare has already begun. As with everything at the intersection of war and AI, warfare using lethal autonomous weapons (LAWs) poses disconcerting questions. LAWs are machines that can identify, select and kill targets without human intervention. Unlike nuclear weapons, these systems are relatively cheap, scalable and hard to control once unleashed.
The level of human control can vary, from “human-in-the-loop” systems that require authorisation for each engagement, to “human-on-the-loop” systems where a human can override autonomous actions, to “human-out-of-the-loop” systems that operate without any human involvement after activation. This possibility of a new kind of war, where a machine makes life-and-death decisions, has spurred further calls at the UN to ban such weapons.
There are fears among ethicists and human rights bodies of accidental escalation, loss of accountability, or full-scale “drone wars” with no human restraint. Clearly, nations are not on the same page, as furious development continues among major powers that see military gains in letting AI take the reins. AI warfare has moved beyond tactical advantage to become established policy, with Chinese military doctrine explicitly naming “intelligentised warfare” as its future. While the notion of LAWs and AI warfare is horrific, this article deliberately steps beyond the familiar “ban or regulate” discourse to explore a few contrarian and counterintuitive views that argue AI could perhaps make war more humane.
Can it save human lives?
One counterintuitive argument is that outsourcing war to machines could save human lives. If robots can shoulder the most dangerous tasks, human soldiers stay out of harm’s way. Is it not better to send a disposable machine into a kill-zone to fight another machine than to send a young soldier to kill another young soldier?
Recent conflicts hint at this lifesaving potential: Azerbaijan’s victory over Armenia in the 2020 Nagorno-Karabakh war, for example, was achieved largely through superior drones, greatly reducing its own casualties.
This could potentially usher in an era of “boutique wars” or persistent, low-intensity conflicts waged primarily by AI systems, flying below the threshold that typically triggers major international intervention.
This sounds tempting, but it has a downside: by making war “risk-free” for the side with more of these killer machines, it could make leaders more willing to launch military adventures.
Can it make warfare more ethical & precise?
A second contrarian idea is that AI might make warfare more ethical by improving precision. Most militaries already try to minimise collateral damage, as India has been trying to do in Operation Sindoor. AI tools could make “surgical” strikes even sharper. Human soldiers, despite their valour, are prone to error, fatigue and emotion.
AI systems, theoretically, can be trained to avoid civilian zones, assess threats more accurately and stop operations when rules of engagement are violated.
Theoretically, an autonomous AI system can be programmed to never fire at a school or a hospital, and it will emotionlessly obey this every single time. Imagine an AI drone that aborts a strike mid-flight because an ambulance enters the frame, something a human pilot might miss in the fog of war. Even the Red Cross has acknowledged that AI-enabled decision support systems “may enable better decisions by humans… minimising risks for civilians”.
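To make the idea concrete, here is a toy, purely illustrative sketch of what a hard-coded rule of engagement of the kind described above might look like. Everything in it is hypothetical (the Detection class, the should_abort check, the confidence threshold); it bears no relation to any real weapon system and glosses over the real difficulty, which lies in the reliability of the perception models feeding such a check.

```python
# Purely illustrative sketch: a simplified "never fire near protected objects" rule.
# All names and thresholds here are hypothetical, not drawn from any real system.
from dataclasses import dataclass

PROTECTED_CLASSES = {"school", "hospital", "ambulance"}

@dataclass
class Detection:
    label: str         # what the onboard vision model thinks it sees
    confidence: float  # model confidence, 0.0 to 1.0

def should_abort(detections: list[Detection], threshold: float = 0.3) -> bool:
    """Abort the engagement if any protected object appears in the frame,
    even at low confidence, erring on the side of not firing."""
    return any(
        d.label in PROTECTED_CLASSES and d.confidence >= threshold
        for d in detections
    )

# Example: an ambulance entering the frame mid-flight triggers an abort.
frame = [Detection("vehicle", 0.9), Detection("ambulance", 0.4)]
print(should_abort(frame))  # True -> the strike is called off
```

The rule itself is trivially simple and obeyed every single time; whether the machine correctly recognises an ambulance in the fog of war is the hard, unsolved part, which is exactly where the next objection comes in.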
The notion of a “clean war” enabled by AI precision can be a double-edged sword. The same Israeli AI system that identified militants in Gaza also churned out algorithmic kill-lists with minimal human review. If flawed data or biased algorithms mislabel a civilian as a threat, an AI could kill innocents with ruthless efficiency. AI can enhance compliance with the laws of war, but it cannot substitute for human judgment.
Can it make war transparent?
Operation Sindoor has highlighted the danger of misinformation and deepfakes being peddled by mainstream media. AI could change this.
Autonomous systems log everything—location data, video footage, target decisions—opening up the possibility of “algorithmic accountability”, with every strike audited, and every action justified, or condemned.
Can it be a new deterrent?
Perhaps the most novel contrarian view appears in a recent paper, “Superintelligence Strategy: Expert Version”, by Eric Schmidt and others, which borrows from the Cold War nuclear deterrence doctrine of MAD, or Mutually Assured Destruction, to propose the concept of MAIM, or Mutual Assured AI Malfunction.
The idea is that as AI becomes core to military systems, nations may hesitate to strike each other, because attacking one AI system could cause unpredictable ripple effects across both sides.
The inherent vulnerability of complex AI systems to sabotage—through cyberattacks, degradation of training data, or even kinetic strikes on critical infrastructure like data centres—creates a de facto state of mutual restraint among AI superpowers.
MAIM flips the script on dystopia: instead of AI dooming us, the mutual fear of runaway AI could keep rival powers’ aggressive instincts in check. It does seem surreal to discuss how AI could actually make war more humane, if there is such a thing, rather than making it even more horrific than ever.
The contrarian perspectives above challenge our instincts, and many will recoil at the idea of killer robots marching to war. However, with so much of this already becoming reality, we can no longer avoid these questions.
We can choose to look at this with grim pessimism, or take a glass-half-full view that technology guided by human values might make future wars less inhuman. Everything, they say, is fair in love and war, and that everything might soon include artificial intelligence.