‘Judeo-Christian’ roots will ensure U.S. military AI is used ethically, general says
A three-star Air Force general said the U.S. military’s approach to artificial intelligence is more ethical than adversaries’ because it is a “Judeo-Christian society,” an assessment that drew scrutiny from experts who say people from a wide range of religious and ethical traditions can work to resolve the dilemmas AI poses.

Lt. Gen. Richard G. Moore Jr. made the comment at a Hudson Institute event Thursday while answering a question about how the Pentagon views autonomous warfare. The Department of Defense has been discussing AI ethics at its highest levels, said Moore, who is the Air Force’s deputy chief of staff for plans and programs.

“Regardless of what your beliefs are, our society is a Judeo-Christian society, and we have a moral compass. Not everybody does,” Moore said. “And there are those that are willing to go for the ends regardless of what means have to be employed.”

The future of AI in war depends on “who plays by the rules of warfare and who doesn’t. There are societies that have a very different foundation than ours,” he said, without naming any specific countries.

The Department of Defense has a religious liberty policy, recognizing that service members “have the right to observe the tenets of their religion, or to observe no religion at all.” The policy broadly allows personnel to express their sincerely held beliefs so long as those actions do not have “an adverse impact on military readiness, unit cohesion, good order and discipline, or health and safety.”

Moore wrote in an emailed statement to The Washington Post that while AI ethics may not be the United States' sole province, its adversaries are unlikely to act on the same values.

“The foundation of my comments was to explain that the Air Force is not going to allow AI to take actions, nor are we going to take actions on information provided by AI unless we can ensure that the information is in accordance with our values,” Moore wrote. “While this may not be unique to our society, it is not anticipated to be the position of any potential adversary.”

Moore’s comments come as U.S. government officials say they’re working on guidelines for the use of AI in warfare. The State Department issued a declaration on “responsible military use of artificial intelligence and autonomy” in February. The Defense Department adopted standards for the ethical use of AI in 2020.

The ethical issues AI raises, including in war, are common to multiple religious and philosophical traditions, said Alex John London, a professor of ethics and computational technologies at Carnegie Mellon University.

“There’s a lot of work in the ethics space that’s not tied to any religious perspective, that focuses on the importance of valuing human welfare, human autonomy, having social systems that are just and fair,” he said. “The concerns reflected in AI ethics are broader than any single tradition.”

Moore didn’t say whom he was referring to when speaking about U.S. adversaries, but much of the U.S. defense industry has focused on China’s burgeoning AI sector. Technology experts told a House Armed Services subcommittee this month the United States risks falling behind China if it doesn’t invest more quickly in military AI.

The Chinese military’s approach to AI ethics is “different in its roots” than that of the United States, but still mindful of ethical dilemmas, said Mark Metcalf, a lecturer at the University of Virginia and retired U.S. naval officer. Comparing the United States’ and China’s ethics policies is “like apples and oranges” because their history differs, Metcalf added.

Ethics texts in the United States draw from thinkers like Augustine of Hippo, Metcalf said, calling it “a very theistic point of view.” Chinese officials reference “Marxism and Leninism, and the [Communist Party] guides what the ethics is,” he added.

That doesn’t mean China ignores ethical dilemmas when thinking about military AI, though.

China’s People’s Liberation Army wants to use the technology without undercutting Communist Party control, Metcalf wrote in a paper analyzing publicly available statements on China’s approach to military uses of AI. Political goals appear to guide its policies, he said.

“Once you turn over control of a weapons system to an algorithm, worst case, then the party loses control” over it, Metcalf said.