Chapter 16

Responsibility in Machine-Age Warfare

Lukas Filler

“Responsibility is a unique concept… You may delegate it, but it is still with you. You may disclaim it, but you cannot divest yourself of it.”

— Admiral Hyman G. Rickover
Testimony on Radiation Safety and Regulation (1961)

Introduction: Morality Under Machine Pressure

Contemporary debates about military artificial intelligence (AI) are often pulled toward familiar but unhelpful extremes. At one end are speculative fears of runaway machines and apocalyptic scenarios in which super-intelligent systems turn against their creators. At the other end lies a technocratic confidence that war can be reduced to an optimization problem, solved through faster processing and better data. Both perspectives distract from the more immediate and consequential danger facing modern militaries. The most pressing ethical risk is not that machines will acquire independent moral standing, but that human commanders will allow these systems to become a pretext for surrendering their own responsibility for judgment.

This drift toward automation is not occurring in isolation. It is driven by acute strategic pressure, particularly the perception of intensifying competition with technologically capable adversaries. China’s pursuit of what it calls “intelligentized warfare”—the integration of AI, data fusion, and cognitive operations to compress decision cycles beyond human reaction times—has sharpened fears that hesitation itself may become a strategic liability. For U.S. leaders, the fear of falling behind an adversary’s decision tempo increasingly frames speed not as a tradeoff against control, but as a necessity for survival. These pressures risk creating a pattern of accelerated deployment in which systems are fielded before their limits are fully understood, and in which the urgency to keep pace outstrips the rigor of testing, validation, and ethical reflection.

Yet much of the ethical anxiety surrounding military AI is misplaced. While the character of war—its tempo, reach, and technical sophistication—is clearly changing,[x] the nature of war as a violent human endeavor has not. The dilemmas raised by AI-enabled warfare are not wholly new. They are an intensification of a long-standing trajectory in which technology has progressively distanced those who wield force from those who bear its consequences. From long-range artillery to strategic bombing and standoff precision weapons, modern militaries have repeatedly sought ways to extend lethality while reducing physical and psychological proximity to harm. Artificial intelligence accelerates this trend, but it does not fundamentally alter the moral structure that governs it.

Much of the current debate focuses on the risk of an “accountability vacuum,” suggesting that responsibility erodes as systems grow more autonomous and opaque. This chapter argues the opposite. Any accountability gap created by AI is not inherent to the technology; it is a result of deliberate or negligent institutional choice. The actual danger lies not only in algorithmic errors or limited transparency, but in the temptation to treat machine outputs as moral insulation. By interposing layers of computation, confidence scores, and procedural validation between the decision-maker and the use of lethal force, AI can function as a kind of moral car wash—allowing leaders to experience violence as technically authorized rather than personally owned.

The response to this challenge is not to develop ethical frameworks for machines, not to treat legal compliance as a substitute for judgment, and not to prohibit military AI altogether. It is to reassert, deliberately and explicitly, the inalienable responsibilities of human command. Throughout this chapter, responsibility refers not merely to legal liability, but to the commander’s obligation to exercise judgment and remain accountable for decisions to use force. Leaders must resist the impulse to treat AI as a substitute for conscience and instead confront the enduring reality that decisions to harm, even when justified, carry an unavoidable moral remainder. As Michael Walzer argued in his account of “dirty hands,” commanders may authorize violence to prevent greater evil, but they cannot escape the moral burden that follows. The central challenge of machine-age warfare is ensuring that this burden is neither diluted nor displaced—that it remains firmly where it belongs, with the humans who decide when and how force is used.

This dilemma cuts both ways. If delegating lethal decision-making to machines risks an abdication of responsibility, refusing to employ AI where it could reasonably reduce civilian harm or protect friendly forces may also constitute a form of dirty hands. A commander who knowingly selects a more predictable and reliable but slower or less discriminating human process, when more precise and rigorously constrained but less proven AI-enabled options are available, does not preserve moral innocence by abstention. That commander accepts foreseeable and avoidable harm for the sake of moral safety. In such cases, restraint itself becomes ethically costly. The inescapable tragedy of command persists in AI-enabled warfare: leaders do not get to choose between clean and unclean options, because every available path results in harms that carry moral residue.