International Disarmament Institute News

Education and Research on Global Disarmament Policy

Morality in an Age of Autonomous Weapons


The following reflection is from Lauren Kube, a Pace University undergraduate who participated in the POL297L Global Politics of Disarmament and Arms Control class in Fall 2024. Students had the opportunity to complete civic engagement assignments with disarmament advocacy efforts in the context of the UN General Assembly First Committee (Disarmament and International Security) meetings in New York City.

The integration of AI into our lives is becoming more obvious every day. From ChatGPT and social media trends to the new iPhone 16, web browsing, and online shopping, AI is becoming so ingrained in society that it will soon be hard to imagine a world without it.

Similarly, autonomous technology is advancing, and not just by way of self-driving cars and the Roomba. It is being used to kill.

While autonomous weapons are not new, advances in technology mean that their capabilities are accelerating rapidly. How will morality in war, if such a thing even exists, survive the introduction of autonomous weapons? Are autonomous weapons ethical? Two major schools of thought are being used to tackle this question.

The first approaches the question through the lens of consequentialism. Consequentialism determines right and wrong by examining the outcome of an action. To judge the ethical standing of autonomous weapons, a consequentialist would need to understand what the weapons are capable of, which largely depends on their reliability. Can autonomous weapons be trusted to be precise and accurate? Are they more reliable than human soldiers? If so, from a consequentialist viewpoint, there may be an ethical obligation to use them.

The second lens being used to examine the morality of autonomous weapons is best described as deontological ethics. In contrast to consequentialism, deontological ethics holds that right and wrong are determined by the action itself, regardless of the outcome. Are autonomous weapons fundamentally inhumane? Are there some decisions that should be reserved for humans even if it is possible to automate them?

In the book Army of None: Autonomous Weapons and the Future of War, Paul Scharre describes the phenomenon of “naked soldier moments”: instances in which soldiers refrain from firing because they see the enemy doing something that makes them human, whether it is smoking a cigarette, watching the sunset, or enjoying a cup of coffee. In that brief moment the psychological gap between a soldier and their enemy narrows, and the soldier sees themselves in the person on the other side. With autonomous weapons, this humanity is lost because there is simply no human involved.

Most humans are naturally inclined not to kill. Scharre writes about interviews with people who served on the front lines during World War II, which revealed that many soldiers were “posturing”: pretending to fight while in reality either not shooting at all or aiming above their targets’ heads.

It is no secret that killing carries a heavy moral responsibility; the overwhelming number of veterans with PTSD proves it. Yet perhaps there is value in that moral responsibility, however traumatic it may be. Removing the moral decision from killing separates the weight of the act from any sense of responsibility.

How does our shared humanity shift if our world becomes one that uses fully autonomous weapons in war? When there is no responsibility in the decision to kill, how will that affect our relationship to death? How will autonomous weapons change the human relationship to violence? Currently, there is no legally binding international instrument that specifically addresses autonomous weapons. If we do not act, the answers to these questions will challenge our most fundamental moral values.
