Saturday, July 27, 2024

Here’s how an AI-human conflict would truly end.

There’s no reason to be concerned about a robot revolt. We can always just unplug it, right? Right?

New science fiction film The Creator envisions a future in which mankind faces off against artificial intelligence (AI). It’s certainly not a new idea for science fiction, but the essential distinction here – as opposed to, say, The Terminator – is that it arrives at a moment when the prospect is beginning to feel more like scientific reality than fantasy.

In recent months, for example, there have been various warnings about the ‘existential threat’ posed by AI. Not only may it one day write this column better than I do (unlikely, I’m sure), but it could also lead to terrifying breakthroughs in warfare – advancements that could spiral out of control.

The most apparent fear is a future in which AI, rather than humans, controls weapons directly. Paul Scharre – author of Four Battlegrounds: Power in the Age of AI, executive vice president and director of studies at the Centre for a New American Security, and one of TIME magazine’s 100 most influential people in AI in 2023 – cites DARPA’s (the Defence Advanced Research Projects Agency’s) AlphaDogfight challenge, which pitted a human pilot against an AI.

Scharre reveals that the AI not only crushed the pilot 15 to zero, but also performed manoeuvres that humans cannot – notably, incredibly high-precision, split-second gunfire.

However, giving AI the ability to make life-or-death judgements raises unsettling questions. For example, what if an AI made a mistake and accidentally killed a civilian? “That would be a war crime,” Scharre argues. “And the difficulty is that there might not be anyone to hold accountable.”

In the near future, though, the most likely application of AI in combat will be in tactics and analysis. “AI can help process information better and make militaries more efficient,” Scharre says.

“Because the military is a brutally competitive setting, I believe militaries will feel driven to delegate increasing amounts of decision-making to AI. If there is an advantage to be won and your opponent takes it but you do not, you are at a significant disadvantage.” According to Scharre, this might lead to an AI arms race similar to the one for nuclear weapons.

“Some Chinese scholars have hypothesised about a singularity on the battlefield,” he said. “[That’s the] point when the pace of AI-driven decision-making eclipses the speed of a human’s ability to understand and humans effectively have to turn over the keys to autonomous systems to make decisions on the battlefield.”

Of course, in such a scenario, it’s not impossible for us to lose control of that AI, or for it to turn against us. It is US policy, as a safeguard, to keep humans in the loop on any decision to deploy nuclear weapons.

“But we haven’t seen anything similar from countries like Russia and China,” Scharre says. “So, it’s an area where there’s valid concern.” And Scharre is not enthusiastic about our prospects if the worst were to happen and an AI declared war on us.

“I mean, could chimps win a war against humans?” he jokes. “Top chess-playing AIs aren’t just as good as grandmasters; grandmasters can’t even compete with them. And that happened really rapidly – that wasn’t the case even five years ago.

“We’re developing increasingly powerful artificial intelligence systems that we don’t understand and can’t control, and deploying them in the real world. I believe that if we create machines that are smarter than people, we will face a slew of issues.”
