
Fear & Loathing In AI: How The Army Triggered Fears Of Killer Robots


Breaking Defense reports (source):

Why did an obscure Army program inspire headlines about “killer robots”?


No, the US Army is not building this.

The Army rolled out its ATLAS targeting AI so clumsily that it blindsided the Pentagon’s own Joint Artificial Intelligence Center and inspired headlines about “AI-powered killing machines.” What went wrong? The answer lies in an ugly mix of misperceptions — fueled by the Army’s own longstanding struggles with the English language — and some very real loopholes in the Pentagon’s policy on lethal AI.

“The US Defense Department policy on autonomy in weapons doesn’t say that the DoD has to keep the human in the loop,” Army Ranger turned technologist Paul Scharre told me. “It doesn’t say that. That’s a common misconception.”


Quartz.com headline on the Army’s ATLAS program

Buzzwords & Firestorms

ATLAS came to public attention in about the worst way possible: an unheralded announcement on a federal contracting website, an indigestible bolus of buzzwords that meant one thing to insiders but something very different to everyone else — not just the general public but even civilian experts in AI.

The name itself is ominous: ATLAS stands for Advanced Targeting and Lethality Automated System. The wording on the website made it worse, soliciting white papers on “autonomous target acquisition technology, that will be integrated with fire control technology, aimed at providing ground combat vehicles with the capability to acquire, identify, and engage targets at least 3X faster than the current manual process.”

“The LA in ATLAS stands for Lethality Automated,” pointed out an appalled Stuart Russell, an AI scientist at Berkeley who’s campaigned for a global ban on lethal autonomous weapons. “‘Acquire, identify, and engage targets’ is essentially the UN definition of lethal autonomy.”

But it’s not the military definition, which is where the problem starts.

The military has long applied the loaded word “lethality” to anything that could make weapons more effective, not just the weapons themselves. Adding new infrared targeting sensors to tanks, for example, is officially a “lethality” upgrade. Networking Navy ships so they can share targeting data is called “distributed lethality.” Then came Defense Secretary Jim Mattis, a retired Marine Corps four-star who liked the word “lethal” so much that underlings plastered it on everything they were trying to sell him on, from high-tech weapons to new training techniques.


Then-Defense Secretary Jim Mattis

What about “engagement”? In plain English, a “military engagement” means people are trying to kill each other (lethally). But in the military, “engagement” can mean anything from “destroy” to “consider” to “talk to.” A Key Leader Engagement (KLE) in Iraq meant soldiers talking with a tribal elder, sheikh, or other influential person over tea.

So in military language — at once abstrusely technical and sloppy — an artificial intelligence can increase “lethality” and “engage” a potential target by helping a human soldier spot it and aim at it, without the AI having any control over the trigger.

There are people in the Pentagon, however, who were aware of how this all sounded even before the original “killing machines” story came out on Quartz. In fact, within hours of the ATLAS solicitation going online, the head of the Pentagon’s nine-month-old Joint Artificial Intelligence Center, Air Force Lt. Gen. John Shanahan, was contacting Army counterparts trying to head off what he feared would be a “firestorm” of negative news coverage.

As far as I can determine, the Army hadn’t officially informed JAIC about the relatively small and nascent ATLAS project. Instead, someone — we don’t know who, but they weren’t on the JAIC staff itself — spotted the online announcement almost immediately and raised a red flag. That JAIC not only got that information but actually acted on it so quickly is a remarkable feat for any government agency, let alone one created less than nine months ago: Where the usual bureaucratic channels dropped the ball, JAIC picked it up.


Paul Scharre

Unfortunately for the Pentagon, JAIC and the Army didn’t move fast enough to get Quartz to update its story. Instead, the next story was in Defense One, headlined “US Military Changing ‘Killing Machine’ Robo-tank Program After Controversy.” In fact, as the body of the article explained, the change was to the wording of the solicitation, not to what the program was actually doing.

The revised solicitation for ATLAS adds a paragraph emphasizing the system will be “consistent with DoD legal and ethical standards,” especially Department of Defense Instruction 3000.09 on “Autonomy in Weapon Systems.” The final decision to fire will always be a human being’s job, the Army insists, in keeping with Pentagon policy.

But policy is not law, and the Pentagon leadership can change it unilaterally. What’s more, even though the military’s AI policy is usually described as requiring a “human in the loop,” there’s actually an enormous loophole.

“It authorizes the development of weapons that use autonomy…for defensive purposes like in Aegis or Active Protection Systems,” Scharre said. “For anything else, it creates a review process for senior leaders to make a determination.”

“It’s not a red light,” Scharre told me. It’s a stop sign: You halt, you check out the situation — and then you can go.


The opening of the Pentagon’s official policy on automated weapons

The Problem With Policy

Are you worried the US military will give computers control of lethal firepower? Well, in one sense, you’re too late — by decades. Scores of Navy warships use the Aegis fire control system to track and target potential threats in the air. Normally a human has to press the button to fire, but the sailors can also set the computer to launch interceptor missiles on its own. That’s an emergency option, intended for use only when the human crew can’t keep up with massive salvos of incoming missiles — but it could shoot down manned aircraft as well.
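To make that manual-versus-automatic distinction concrete, here is a purely notional Python sketch of a console with a “human must authorize” mode and an emergency automatic mode. It is not modeled on Aegis software or any real system; every identifier in it is invented for illustration.

```python
# Notional sketch of the "human presses the button vs. emergency auto mode"
# distinction described above. Not based on any real fire-control software.

from enum import Enum

class Doctrine(Enum):
    MANUAL = "human must authorize each launch"
    AUTO_SPECIAL = "system may launch interceptors on its own"

class AirDefenseConsole:
    def __init__(self) -> None:
        self.doctrine = Doctrine.MANUAL  # default: human in control

    def set_doctrine(self, doctrine: Doctrine) -> None:
        # Crews would switch to auto only when incoming salvos outpace human reaction.
        self.doctrine = doctrine

    def handle_track(self, threat_confirmed: bool, human_authorized: bool) -> str:
        if not threat_confirmed:
            return "hold fire"
        if self.doctrine is Doctrine.MANUAL:
            return "launch" if human_authorized else "await human authorization"
        return "launch"  # AUTO_SPECIAL: no human keystroke required

if __name__ == "__main__":
    console = AirDefenseConsole()
    print(console.handle_track(threat_confirmed=True, human_authorized=False))
    console.set_doctrine(Doctrine.AUTO_SPECIAL)
    print(console.handle_track(threat_confirmed=True, human_authorized=False))
```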


Navy Aegis ships firing

Aegis isn’t the only example, Scharre pointed out. There are the Navy’s Phalanx and its Army spin-off, C-RAM, which automatically shoot down incoming missiles and rockets. The Army has started fielding Active Protection Systems, a miniaturized missile defense that can fit on a tank.

None of these systems is an artificial intelligence in the modern sense. They are purely deterministic sets of old-fashioned algorithms that always produce the same output from a given input, whereas machine learning algorithms evolve — often unpredictably and sometimes disastrously — as they process more and more data. The initial version of Aegis was actually introduced in 1973, long before the Defense Department first issued DoD Instruction 3000.09 in 2012. But it’s not only old systems being grandfathered in: Active Protection Systems are just entering service now.
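To make the distinction concrete, here is a minimal, purely illustrative Python sketch (none of these names come from any real weapon system): a deterministic rule returns the same answer for the same input forever, while a toy “learned” threshold drifts as it processes more data.

```python
# Illustrative contrast only -- not actual fire-control code.
# A deterministic rule always maps the same input to the same output;
# a toy learned threshold shifts as it ingests more examples.

def deterministic_intercept_rule(closing_speed_mps: float) -> bool:
    """Old-fashioned rule: same input, same answer, every time."""
    return closing_speed_mps > 300.0  # fixed, human-chosen threshold

class LearnedThreshold:
    """Toy stand-in for a machine-learning model: its decision boundary
    moves as it sees more examples, so yesterday's answer may change."""
    def __init__(self, initial_threshold: float = 300.0):
        self.threshold = initial_threshold
        self.count = 0

    def update(self, observed_speed: float) -> None:
        # Running-average update: each new observation nudges the boundary.
        self.count += 1
        self.threshold += (observed_speed - self.threshold) / self.count

    def decide(self, closing_speed_mps: float) -> bool:
        return closing_speed_mps > self.threshold

if __name__ == "__main__":
    model = LearnedThreshold()
    print(deterministic_intercept_rule(350.0))  # always True
    print(model.decide(350.0))                  # True today...
    for speed in (500.0, 620.0, 580.0):         # ...but new data
        model.update(speed)                     # moves the boundary
    print(model.decide(350.0))                  # ...and the answer flips
```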

So what does the regulation actually say?

  • DoD 3000.09, Section 4.c(2), covers “human-supervised autonomous weapons systems” like Aegis, so called because a human overseer can shut them off at any time, and specifically limits them to defensive purposes, explicitly banning the “selecting of humans as targets.”
  • Section 4.c(3) allows computer-controlled non-lethal systems, such as radar jammers. (Automated cybersecurity software is permitted elsewhere).
  • Section 4.c(1) allows the use of lethal force by “semi-autonomous weapons systems” (emphasis added), which aren’t fully computer-controlled. But even those must “not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator.”

Such strictly regulated systems are a far cry from the Terminator, or even Stuart Russell’s more realistic nightmare scenario of swarming mini-drones. But while Section 4.c is the heart of the Pentagon policy on autonomous weapons, it’s immediately followed by a loophole:

  • Section 4.d states that “Autonomous or semi-autonomous weapon systems intended to be used in a manner that falls outside the policies in subparagraphs 4.c.(1) through 4.c.(3) must be approved” before development can proceed. Who approves? Two under secretaries of defense (for policy, and for acquisition and technology) and the Chairman of the Joint Chiefs. Getting three such high-level officials to sign on is a daunting challenge for any bureaucrat, but it’s hardly impossible.
  • Even after the three officials approve an exception, the system must follow a long list of safety and testing guidelines and ensure “commanders and operators [can] exercise appropriate levels of human judgment in the use of force.” But “appropriate” is left undefined.

What’s more, if all three officials agree, they can ask the Deputy Secretary of Defense to waive all of those restrictions, “with the exception of the requirement for a legal review, in cases of urgent military operational need” — again, left undefined.
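One way to see why critics call Section 4.d a loophole is to lay the flow out as pseudocode. The sketch below is only a paraphrase of the directive as summarized above; the field names and the review_path function are invented for illustration and are not the Pentagon’s own logic or an official interpretation.

```python
# Rough sketch of the DoDI 3000.09 approval flow as this article reads it.
# A paraphrase for illustration, not official policy. All names are invented.

from dataclasses import dataclass

@dataclass
class WeaponConcept:
    autonomous: bool              # can it select and engage on its own?
    lethal: bool                  # does it apply lethal force?
    targets_humans: bool          # would it select humans as targets?
    human_supervised: bool        # can an overseer shut it off at any time?
    defensive_only: bool          # limited to defending against incoming threats?
    human_selected_targets: bool  # were targets chosen by an authorized operator?

def review_path(w: WeaponConcept) -> str:
    # Section 4.c(2): human-supervised, defensive, never targeting humans.
    if w.autonomous and w.human_supervised and w.defensive_only and not w.targets_humans:
        return "Allowed under 4.c(2) (e.g. Aegis-style defensive systems)"
    # Section 4.c(3): autonomous but non-lethal (e.g. radar jamming).
    if w.autonomous and not w.lethal:
        return "Allowed under 4.c(3) (non-lethal systems)"
    # Section 4.c(1): semi-autonomous lethal systems with human-selected targets.
    if not w.autonomous and w.lethal and w.human_selected_targets:
        return "Allowed under 4.c(1) (semi-autonomous)"
    # Section 4.d: everything else needs senior sign-off before development --
    # a stop sign, not a red light; most requirements (except legal review)
    # can be waived for urgent operational need.
    return "Requires senior review under 4.d; waivable except for legal review"

if __name__ == "__main__":
    atlas_as_described = WeaponConcept(
        autonomous=False, lethal=True, targets_humans=True,
        human_supervised=True, defensive_only=False,
        human_selected_targets=True,
    )
    print(review_path(atlas_as_described))  # falls under 4.c(1) as the Army describes it
```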

Nowhere in this document, incidentally, will you find the comforting but imprecise phrase “human in the loop.” In fact, when I used it in a query to the Pentagon, I got gentle chiding from DoD spokesperson Elissa Smith: “The Directive does not use the phrase ‘human in the loop,’ so we recommend not indicating that DoD has established requirements using that term.”

The Real Barrier

So what is stopping the Defense Department from developing AI weapons that can kill humans? The real barrier, it turns out, is not legal or technological: It’s cultural. The US military isn’t developing killer robots because it doesn’t want them.

Every officer and official I’ve ever talked to on the subject, for at least eight years, has said they want AI and robotics to help humans, not replace them, and even then they want AI primarily in non-combat functions like logistics and maintenance. In fact, Pentagon leaders seem to think taking the human out of the loop would mean giving up one of the American military’s most crucial advantages: the training, creativity, and, yes, ethics of its people.


Robert Work

“The last thing I want is you to go away from this thinking this is all about technology,” then-Deputy Secretary Robert Work told us in 2015. Work, whose Third Offset Strategy first made AI a top priority for the Pentagon, has remained deeply engaged in the debate. “The number one advantage we have is the people in uniform, in our civilian work force, in our defense industrial base, and the contractors who support us.”

But Work also said “we want our adversaries to wonder what’s behind the black curtain,” Stuart Russell pointed out, as part of a deterrence strategy. Does the waiver provision in Pentagon policy mean those secret programs could already include lethal AI?

Well, no, Smith told me in a statement: “To date, no weapon has been required to undergo the Senior Review in accordance with DOD Directive 3000.09.”

But with the stakes so high, Russell argues, can the Pentagon really expect potential adversaries or even US-based companies like Google to take it at its word? Or, following the longstanding intelligence maxim to look at capabilities instead of intentions, should they judge programs like ATLAS, not by what the US says they’ll do, but by what they could become?

“The declared intention and the intention are not the same thing,” Russell said bluntly. “If the aggressive pursuit of partial autonomy were accompanied by a full-on diplomatic effort to negotiate an international ban on full autonomy, there would be less of an issue. As it stands, the ATLAS announcement will be taken as an indicator of future intent.”

The case for a global ban on killer robots — and the problems with one — are the topic for Part III of this series, out Friday.


Comments
Pave Way IV

Then-Deputy Defense Secretary Work: “The number one advantage we have is the people in uniform, in our civilian work force, in our defense industrial base, and the contractors who support us.”

Both soldiers and the civilian work force are far too costly – at least to the current ‘defense industrial base’. Replace soldiers with AI and the civilian work force with robots, and problem solved. The ‘defense industrial base’ makes a fortune off of contractors it pimps out to the military to fix its overpriced, overly-complex crap. AI and automated systems are a profit gold mine for the defense industry. Fewer of those worthless meat-based soldiers and employees, more overpriced, overly complex hardware that requires an army of its own contractors to keep running.

Screw the fantasy of Department of Defense ‘ethics’ restraining AI [chuckle] – defense industry profits drive all long-term military-of-the-future decisions. Psychopaths get a boner at the thought of more direct control of the battle and killing – they don’t want any man-in-the-loop interfering with their decisions.

I’m just saying that this is the way we’re going in the US, not that it’s the correct or best choice. How much success does anyone think a future US robot army and AI decision-makers would have against the Houthis? How much would AI ‘assistance’ have helped us in Afghanistan? F’king Pentagon Israeli-firster neocon chickenhawks think like a gang of degenerate, psychopathic sixth-grade boys.

Harry Smith

Battle robots on the US mainland. Very good! Hack the system and you wouldn’t have to send your army there! :)
