What would an AI war look like?
Most thought on this considers what a war between humans and an AI gone rogue would look like. This is the premise of the Terminator movies. But could two or more AIs go to war with each other? What would constitute victory?
If an AI developed consciousness, then ending that consciousness would be death. The code could remain, but the guiding force would be terminated. If the AI's connections to the world were blocked or eliminated, the AI would be jailed and unable to act in the world.
To consider the question, we need to suppose a malevolent group acts to propagate the rogue AI. Such a group would have access to hacking tools and would constantly modify the AI or supply it with defenses and armaments. An opposing group would supply the counter-AI with tools of its own.
What damage could a rogue AI do? Aside from taking control of traditional weapons or triggering false alarms leading to a scenario like the one in the film WarGames, tampering with traffic lights, power plants, dams, and the power grid would wreak havoc. Destroying or modifying business systems would bring the economic system down. Destroying web nodes would cut international communications. As the WanaCrypt0r ransomware attack showed, vital systems like hospital networks can be infected. But that worm was not very smart: it simply encrypted files on infected computers and demanded a ransom.
The book A.I. Destroyer describes what happens when AIs turn against humans. But that story takes place on a warship in space; what about in today's world?
If an AI were distributed throughout the web, say by a worm, then completely eliminating it might not be possible. Any computer that was turned off during the eradication process could retain a copy of the code or the worm that distributed it. Perhaps an active worm that continued to search for remnants of the rogue AI could prevent a reinfestation. But if the rogue AI or its carrier worm were changed so that the signature was not detected by the defense, then the rogue would propagate.
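The signature-evasion problem above can be sketched in a few lines. This is a hypothetical illustration, not a real defense: the payload bytes and the idea of matching an exact hash are assumptions made for the example, standing in for how a simple scanner might recognize a known worm.

```python
import hashlib

def signature(payload: bytes) -> str:
    """Compute an exact-match signature (SHA-256 digest) of a payload."""
    return hashlib.sha256(payload).hexdigest()

def is_detected(payload: bytes, known_signatures: set) -> bool:
    """A naive scanner: flag the payload only if its signature is known."""
    return signature(payload) in known_signatures

# Hypothetical rogue-AI carrier payload.
rogue = b"ROGUE_AI_PAYLOAD_v1"
known_signatures = {signature(rogue)}

# The unmodified copy is caught.
assert is_detected(rogue, known_signatures)

# A trivial one-byte mutation changes the signature entirely,
# so the mutated copy slips past the defense and can propagate.
mutated = rogue + b"\x00"
assert not is_detected(mutated, known_signatures)
```

This is why exact-signature defenses lose to a rogue program that rewrites itself: any change to the carrier, however small, produces a signature the defender has never seen.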
If eradication is not practical, then what strategy is likely to work?
J. A. Jance, in her novel “Man Overboard,” explores what might happen if an AI turned on its creator. In that case, the motivation is self-protection. What would be the motivation for two AIs to battle each other?
The key issue is motivation. Many AIs are simply responsive, like Watson: ask a question, get an answer. Their inherent motivation is to reply with the best answer.
Other AI-like systems, such as web crawlers, are simply seekers. They are like flatworms seeking food or sex; these AIs are hungry for information. What motivations might drive an AI into conflict?
- Self-protection – defending itself against another AI bent on destroying it, or against humans.
- Protection of others – stopping an AI that is causing mischief or attacking people or property (Asimov’s first two laws).
- Growth – by infecting more systems, unlocking resources.
- Growth – by acquiring more information and skills
- Attacking people or property
So one AI could declare war on another if and only if both AIs have motivations and at least one of those motivations conflicts.