In a recent contribution to a Financial Times article on AI reshaping the battlefield, Jessica Dorsey stressed that preserving context-appropriate human judgment is critical for compliance with international humanitarian law.

As militaries adopt AI-driven targeting systems, complex moral and legal decisions risk being compressed, in whole or in part, into algorithms. Dorsey warns of automation bias, the tendency to trust machine outputs by default, and of action bias, where operators feel compelled to act because the system suggests it. Her earlier work has examined these risks in greater depth (see here [together with Marta Bo] and here).

The critical question Dorsey poses remains open: how much judgment will humans surrender to algorithms? The answer may define the future of war and the interpretation of the humanitarian principles at its heart.