Rules on ‘Killer AI’ Vital as Practical Usage Nears

A Russian-made Lancet self-destructing drone and its operator (Russian Defense Ministry)

WASHINGTON / BEIJING — Formulating rules on lethal autonomous weapons systems (LAWS) — also referred to as “killer AI” — is of paramount importance as conflicts continue to rage around the world, including in Ukraine.

The United Nations on Friday adopted a resolution on the creation of rules for these weapons, which leverage artificial intelligence to identify and kill human targets without humans being involved in the decision-making process. Positions on LAWS vary among countries.

‘Testing ground’

On Dec. 16, a small Russian military drone was spotted circling above a hospital in the southern Ukrainian region of Kherson.

Video footage released by Reuters depicts a drone approaching the hospital, followed by a loud explosion and the sound of breaking glass. A doctor was injured in the attack.

AI-equipped drones have been used in Ukraine for reconnaissance and attack purposes since February 2022, when Russia began its aggression against the country.

The Russian military has employed Iranian-made self-destructing drones, which have caused widespread damage to civilians and public facilities.

Former U.S. Defense Department official Paul Scharre said AI detects humans based on body temperature, moisture content and advanced image analysis, making it difficult for people to escape an attack after being targeted.

The Ukrainian military, too, has apparently been using AI-equipped drones. Drones and unmanned boats that incorporate partially autonomous technology have existed for some time, and the battle zones of Ukraine have become a “testing ground” for such weapons, driving massive strides in the technology behind them.

Although the use of LAWS has not been confirmed, the technology is said to have reached a stage where it can be put to practical use.

Military balance

The United States this year called for an international code on the military use of AI. The practical use of LAWS appears to be looming, and their uncontrolled spread could alter the global military balance, which is premised on human intervention.

The introduction of fully autonomous systems could increase the risk of misdirected attacks if AI were to “run amok.” If an erroneous LAWS-based attack were to occur involving such major powers as the United States, China and Russia, it could potentially trigger a world war.

LAWS have not yet been put into practical use, so no international definition of the technology has been established.

In a document submitted to the United Nations last July, China asked for a ban on weapons systems that were lethal, fully autonomous and capable of continuing to evolve beyond human expectations through independent learning, among other characteristics.

“China is unhappy with the international rules that have been established under the leadership of the United States and Europe,” said Masafumi Iida of the National Institute for Defense Studies. “China may strengthen its efforts to attract emerging and developing countries in order to gain an advantage in terms of rulemaking.”

Developments accelerating

In August, the U.S. Defense Department announced a new project that it calls the “Replicator Initiative,” in which thousands of low-cost, AI-based systems will be deployed across multiple domains — including air and sea — over the next two years.

The main aim of the initiative is to undermine China’s anti-access/area-denial (A2/AD) strategy, which, in the event of a Taiwan contingency, could prevent U.S. forces from entering the first island chain running from the Kyushu region through the Nansei Islands to Taiwan.

U.S. Deputy Defense Secretary Kathleen Hicks, who leads the initiative, said in September that the department was “beginning with accelerating the scaling of all-domain attritable autonomous systems.

“They can help a determined defender stop a larger aggressor from achieving its objectives, put fewer people in the line of fire,” Hicks added.

AI-based weapons are certain to spread rapidly, as they reduce the risk to soldiers’ lives and cut manpower costs, but there are fears that a development race could gather pace in the absence of international rules.

Is quick consensus possible?

Many developing countries lag behind in AI technology and fear that conflict could arise from the use of LAWS.

At a meeting of the U.N. General Assembly on Friday, over 150 countries — the overwhelming majority — voted to adopt a resolution to promote international rulemaking on LAWS.

U.N. Secretary General Antonio Guterres has called for the creation of a framework to legally ban LAWS by 2026, but it is unclear if this will happen.

U.N. discussions on regulating LAWS began in 2014. Countries in Africa, Latin America and elsewhere have remained cautious about the development of AI weapons, while the United States, China and Russia favor their development, causing the talks to stall. Japan, meanwhile, takes the position that LAWS should not be developed, on ethical grounds.

Based on Friday’s resolution, Guterres intends to submit a report summarizing the views of each country at the next session in September in hopes of kick-starting negotiations aimed at creating a treaty for regulation, according to a U.N. diplomatic source.

Peter Asaro, an associate professor of media studies at The New School in the United States, said: “I think this is the most significant step on the regulation of LAWS … in terms of preventing unregulated development and practical application.”

Asaro said LAWS, which could allow AI to make the decision to take a human life, will pose serious legal and moral challenges.

Russia and India currently oppose regulations for LAWS. Pessimism persists as to whether the U.N. — which seems to be irreconcilably divided over Russia’s aggression in Ukraine and the fighting between Israel and the Islamist group Hamas — will be able to reach consensus over LAWS regulations quickly.