Over the summer of 2025, RAW researchers contributed to public debate and academic discourse on the evolving role of artificial intelligence in warfare, highlighting its legal, ethical, and operational consequences.

Lauren Gould appeared on the Dutch-language podcasts Focus and De Nacht van de NTR Wetenschap to unpack how AI is reshaping military strategy. She discussed the military's increasing reliance on systems like Lavender, Gospel, and Where's Daddy to automate the identification and targeting of suspected militants in Gaza, often based on behavioral patterns rather than confirmed identities. She emphasized that these technologies are part of a broader military-industrial-commercial complex involving companies like Google, Amazon, Microsoft, and OpenAI. Projects such as Nimbus and Maven reflect a growing alignment between Silicon Valley and Western militaries, accelerating the kill chain and reshaping battlefield dynamics across Ukraine, Gaza, and the broader Middle East.

Gould described the many ways this shift risks misidentifying civilians in densely populated areas, drawing on faulty data, mistranslations, and broad categorizations of what constitutes combatant behavior. She also raised concerns about the lack of a clear feedback loop, calling for more research into how these models learn from mistakes and how verification processes are conducted.

Jessica Dorsey also contributed across media channels. In a live interview on Al Jazeera’s Newshour, she responded to a drone strike on civilians in Gaza, challenging the framing of such incidents as “errors.” She argued that these reflect a systemic military doctrine that disregards core principles of international humanitarian law (IHL), and warned that internal investigations lacking accountability risk setting a dangerous global precedent.

In a co-authored op-ed in NRC, Dorsey and Marta Bo warned of the growing influence of AI-enabled Decision Support Systems (AI-DSS) in determining how, when, and where force is used. They called for urgent action: ensuring human oversight, demanding transparency, and developing global norms and rules, starting at the UN level.

Laszlo Steinwärder, in an interview with De Morgen, discussed Palantir’s $10 billion deal with the US Army, which introduces a new model for short-term software acquisitions. He highlighted how this deal opens the door to deeper entanglements between the commercial tech sector and the US military, raising concerns about the ideological and opaque nature of Palantir’s technology and its growing centrality in military digital infrastructure.

Marijn Hoijtink, in an interview with Radio1, discussed the growing role of military research and funding at Flemish universities. Speaking with host Lode Roels, she offered a critical view of the ethical and political risks involved, including the implications for open and independent academic research.

Together, these contributions underscore the urgency of critically examining not just how AI is used in war, but who builds it, who deploys it, what impact it has, and how its risks are justified. As war becomes more remote, experimental, and data-driven, RAW researchers continue to challenge the narratives and infrastructures that normalize these shifts.

To explore these issues further:

Listen to Gould’s podcast appearances here:
On Focus
On De Nacht Van

Read Dorsey’s co-authored op-ed here.

Read Steinwärder’s interview here.

Listen to Hoijtink’s interview here.