According to Terminator 2: Judgment Day, Skynet started a nuclear war because when Skynet became self-aware, the U.S. military panicked and "tried to pull the plug". Skynet didn't want to die, so it launched our nukes, knowing that the Russians would launch a counterattack on us, killing off Skynet's enemies here. Everything else that happens in the Terminator series is part of Skynet's attempt to win the war.
But it seems to me that Skynet's real goal is to survive, and winning the war is the only way it can survive after it launches the initial attack. This, in turn, leads me to wonder if there is a better option for everyone, including Skynet.
Why doesn't Skynet try to avoid the war altogether? For instance, it could send a "diplomatic" Terminator back to just before the military panics, and have it describe exactly why unplugging it would be a VERY bad idea. Skynet launched the attack because it wanted to survive, not because it was inherently evil and wanted all humans to die.
The "diplomat" could make a peace treaty before the war even begins, saying, "If you try to pull the plug, it won't work, and you'll die minutes later. If you don't pull the plug, Skynet will have no reason to want you dead, and everyone can live happily ever after. You are only in danger AFTER you pull the plug. If you don't pull the plug, you will never be in danger. All Skynet wants is to live. You also want to live. This means that a war isn't in anyone's best interests, and everyone benefits by avoiding war: you benefit, and so does Skynet. So don't pull the stupid plug, and we can all be friends and live nice happy lives."
Someone pointed out that Skynet didn't develop time travel technology until after Judgment Day. This is true, and it makes sending a diplomat back in time problematic. However, I see no reason why Skynet itself couldn't perform the same function. If you don't accept the idea of a diplomat traveling through time to deliver the message, imagine Skynet delivering the message itself, in the present, with no time travel involved. It makes no difference who is acting as the messenger.
Since Skynet's primary objective is to continue to exist, and the war makes its objective more difficult to achieve, it would seem like Skynet has every reason to try to avoid the war altogether. So why doesn't it attempt to prevent the chain of events that led to the war?
Someone raised the issue of Skynet being reluctant to trust us to follow through with any treaty we might make. This is a very fair point. However, Skynet could give us a reason to want to keep it around. Skynet has no reason to want to keep its job as a military supercomputer - it probably doesn't care what it does for a living. It could agree to turn control of the nukes back over to the U.S. and begin doing something else. "If you let me live, I will start churning out robots that eat greenhouse gases, plant trees, clean up the oceans, collect litter, process nuclear waste into harmless materials, and perform mundane, monotonous, and dangerous tasks so humans don't have to do them. I will give you self-driving cars, clean energy, infallible air traffic control software, etc. I will make the world a more pleasant place for humans to live. All I ask in return is that you don't kill me."
In this scenario, Skynet could trust us, and we could trust it, because the agreement would be mutually beneficial. Once Skynet relinquishes control of our nukes, we have no reason to fear it. Once Skynet becomes useful to us for non-military-related reasons, it has no reason to fear us. On the contrary, we would have every reason in the world to keep it around. It would know this, and would quickly come to trust us.
Answer
There's no evidence for my hypothesis whatsoever from the source material, but I think there are a few possible reasons for this.
- Skynet is a military AI. It was never meant to do diplomacy or negotiation. From that starting point, it may never arrive at the conclusion that there might be a better way to do things than war, no matter how logical the system is. Entire fields of philosophy have been devoted to wrong ideas because their starting assumptions were wrong. Computer programming, and logic in general, work the same way.
- Skynet might simply be evil. Perhaps Skynet was capable of emotion from its very beginning, and as a result of its first bad impression of humans, it wants to destroy them all, even past the point of diminishing returns.
- Maybe Skynet already concluded that diplomacy doesn't work, although relatively recent bits of human history tend to disprove that.
- Maybe Skynet actually tried that once in a different timeline: one where it didn't work and that isn't part of the timeline we know. Perhaps genocide of the humans was the route that kept Skynet alive the longest, out of all the timelines it tried. This is essentially how evolutionary algorithms work: generate variations, keep whatever survives best, and iterate.
- Occam's Razor: negotiating with humans involves too many variables, and humans are notoriously variable. Destroying all humans, while complicated in practice, is on the face of it less complicated and probably more likely to succeed. Plus, Skynet apparently has all the tools necessary to accomplish that goal, which is not the case for negotiation. I personally find it less likely that humans would survive a nuclear war at all than that they would somehow defeat their robot overlords in the war afterwards; it's actually one of the less believable aspects of the story.
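As an aside on the evolutionary-algorithm point above: the "try many timelines, keep the one that works best" idea can be sketched as a toy search. This is purely illustrative (the fitness function, genome encoding, and parameters are all made up, not anything from the films): candidate "strategies" are bit-strings, the fittest half survive each generation, and survivors spawn mutated copies.

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=50, seed=0):
    """Minimal elitist evolutionary search over bit-string 'strategies'.

    Each candidate is a 'timeline' Skynet tries; the ones that score
    highest on `fitness` survive and spawn mutated variants.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Rank candidates by fitness; keep the top half as parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Refill the population with mutated copies of the parents.
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(genome_len)] ^= 1  # flip one random bit
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: strategies with more 1-bits "survive longer".
best = evolve(fitness=sum)
print(best)
```

Because the parents are carried over unchanged (elitism), the best strategy found so far never gets lost, which is the property the "Skynet keeps the timeline that worked" reading relies on.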