In their recent article for Samenleving & Politiek, War as a Testing Ground for Military AI, Linde Arentze, Lauren Gould and Marijn Hoijtink show how Ukraine and Gaza have become real-time laboratories for military AI. Far from delivering the promised “smarter” or more “precise” warfare, both conflicts demonstrate how algorithmic systems accelerate targeting, widen the pool of people labelled as threats and diminish the space for human judgement. The battlefield is increasingly shaped by data-fusion platforms that integrate satellite imagery, drone footage, communications metadata and behavioural signals. Private companies such as Palantir now form part of this decision-making infrastructure, blurring the line between military and commercial power.

Lauren Gould underscores these risks in her recent WNL interview, warning that AI-supported targeting regularly misfires. She points out how behaviour-based models, which can flag someone as suspicious simply because they changed their SIM card, may wrongly classify civilians as militants. Instead of correcting the chaos of war, these systems import their own errors and biases into life-and-death decisions. As she notes, the idea that AI brings cleaner, more controlled conflict “is simply not what we see happening in practice.”

Across Gaza and Ukraine, the consequence is a sharp increase in the speed and scale of attacks, while transparency and accountability shrink. Civilians become legible to militaries not as people but as patterns, probabilities and datapoints. When human review is reduced to seconds, “meaningful human control” becomes a formality rather than a safeguard. What emerges is a new type of warfare: high-tempo, data-driven and deeply opaque, where algorithmic outputs guide violence in ways that escape clear oversight.

For RAW, these developments underline a central concern: when wars become testing grounds for military AI, innovation displaces responsibility, and human lives bear the cost.