There is a particular scene in I, Robot that raises an interesting issue. Detective Spooner is telling the story of how he lost his arm. He was in a car accident and found himself and a young girl underwater, about to drown. Spooner was saved by a robot that was passing by. The reason given for why the robot saved Spooner and not the girl was that, statistically speaking, his chances of survival were higher than hers. That makes sense, as a robot would make decisions based on statistical probability. But there is a problem: in the flashback, we clearly hear Spooner shout, "Save the girl!" to the robot. That was a direct order from a human to a robot.
At first I thought that this was not a violation of the 2nd law, because if the robot had obeyed the order, Spooner would have died, and the 2nd law of robotics cannot override the 1st law. But the problem with this is that, if a Sophie's Choice-type situation counts as breaking the 1st law of robotics, then an emergency robot would never be able to perform triage (as the robot that saved Spooner clearly did). If choosing to save one human counts as harming another, then a robot could not operate effectively in an emergency such as this.
All this leads to my question:
Did this robot break the 2nd law of robotics, or is there some nuanced interpretation of the laws that could explain its behaviour?
Answer
The film appears to operate on anachronistic Asimov mechanics
What we would have here is likely a first-law vs first-law conflict: since the robot cannot save both humans, one will die no matter which it chooses.
I, Robot era:
There is definitely precedent for an I, Robot era robot knowingly allowing humans to come to harm, in the short story "Little Lost Robot". However, this was under the circumstance that the humans would come to harm regardless of the robots' actions, so the robots reason that it is not through their inaction that the humans come to harm.
However, I would suspect that instead, an Asimov robot would interpret the situation in the film as a first-law vs first-law conflict, since either human could be saved depending on the robot's decision. In other words, the robot could have saved the child, but didn't, which would be a first law violation. Looking at both victims this same way, the robot would then find this to be a first-law vs first-law conflict.
The short story "Liar!" explores what happens when a robot is faced with a first-law vs first-law scenario:
Through a fault in manufacturing, a robot, RB-34 (also known as Herbie), is created that possesses telepathic abilities. While the roboticists at U.S. Robots and Mechanical Men investigate how this occurred, the robot tells them what other people are thinking. But the First Law still applies to this robot, and so it deliberately lies when necessary to avoid hurting their feelings and to make people happy, especially in terms of romance.
However, by lying, it is hurting them anyway. When it is confronted with this fact by Susan Calvin (to whom it had falsely claimed that her coworker was infatuated with her, a particularly painful lie), the robot experiences an insoluble logical conflict and becomes catatonic.
In short, an I, Robot era robot in Asimov's writing would not have been able to continue functioning after this scenario and would have to be discarded completely. It's likely that it would not even be able to function after being initially faced with the scenario, thereby destroying itself before being able to rescue either human.
The second law is irrelevant here, because a first-law vs first-law conflict results in an unsurvivable deadlock. The first law is the "trump card", so to speak: it is not assigned a graded priority, lest the second or third law ever compete with it, as we see in "Runaround":
In 2015, Powell, Donovan and Robot SPD-13 (also known as "Speedy") are sent to Mercury to restart operations at a mining station which was abandoned ten years before.
They discover that the photo-cell banks that provide life support to the base are short on selenium and will soon fail. The nearest selenium pool is seventeen miles away, and since Speedy can withstand Mercury’s high temperatures, Donovan sends him to get it. Powell and Donovan become worried when they realize that Speedy has not returned after five hours. They use a more primitive robot to find Speedy and try to analyze what happened to it.
When they eventually find Speedy, they discover he is running in a huge circle around a selenium pool. Further, they notice that "Speedy’s gait [includes] a peculiar rolling stagger, a noticeable side-to-side lurch". When Speedy is asked to return with the selenium, he begins talking oddly ("Hot dog, let’s play games. You catch me and I catch you; no love can cut our knife in two" and quoting Gilbert and Sullivan). Speedy continues to show symptoms that, if he were human, would be interpreted as drunkenness.
Powell eventually realizes that the selenium source contains unforeseen danger to the robot. Under normal circumstances, Speedy would observe the Second Law ("a robot must obey orders"), but, because Speedy was so expensive to manufacture and "not a thing to be lightly destroyed", the Third Law ("a robot must protect its own existence") had been strengthened "so that his allergy to danger is unusually high". As the order to retrieve the selenium was casually worded with no particular emphasis, Speedy cannot decide whether to obey it (Second Law) or protect himself from danger (the strengthened Third Law). He then oscillates between positions: farther from the selenium, in which the order "outweighs" the need for self-preservation, and nearer the selenium, in which the compulsion of the third law is bigger and pushes him back. The conflicting Laws cause what is basically a feedback loop which confuses him to the point that he starts acting inebriated.
Attempts to order Speedy to return (Second Law) fail, as the conflicted positronic brain cannot accept new orders. Attempts to force Speedy back to the base with oxalic acid, which can destroy it (Third Law), also fail; the acid merely causes Speedy to change routes until he finds a new avoid-danger/follow-order equilibrium.
Of course, the only thing that trumps both the Second and Third Laws is the First Law of Robotics ("a robot may not...allow a human being to come to harm"). Therefore, Powell decides to risk his life by going out in the heat, hoping that the First Law will force Speedy to overcome his cognitive dissonance and save his life. The plan eventually works, and the team is able to repair the photo-cell banks.
Robot novels era:
A few thousand years after the I, Robot era, the first-law vs first-law dilemma has essentially been solved.
In The Robots of Dawn, a humaniform robot experiences a deadlock and is destroyed, and Elijah Baley is tasked with discovering why. He suggests to Dr. Fastolfe, one of the greatest roboticists of the age as well as the robot's owner and creator, that a first-law vs first-law dilemma might be responsible, citing the story of Susan Calvin and the psychic robot. However, Dr. Fastolfe explains that this is essentially impossible in the modern age, because even first law invocations are assigned a priority, and equal priorities are resolved by random selection; he himself is probably the only person alive who could orchestrate such a deadlock, and even then only on a good day.
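The priority scheme Fastolfe describes can be sketched in toy form. This is purely illustrative Python, not anything from the novels; the appeal descriptions and priority numbers are invented:

```python
import random

# Toy model of the priority scheme described above (illustrative only).
# Each appeal to a Law carries a numeric priority; the robot acts on the
# highest-priority appeal, and exact ties are resolved randomly, which
# is what makes a true first-law vs first-law deadlock nearly impossible.

def resolve(appeals):
    """appeals: list of (description, priority) pairs; returns one to act on."""
    top = max(priority for _, priority in appeals)
    tied = [a for a in appeals if a[1] == top]
    return random.choice(tied)  # equal priorities: pick at random

# Invented numbers: survival odds stand in for First Law priority.
appeals = [
    ("save Spooner (45% survival odds)", 0.45),
    ("save the girl (11% survival odds)", 0.11),
]
print(resolve(appeals)[0])  # the higher-priority appeal wins outright
```

With unequal priorities the choice is deterministic; only an exact tie ever reaches the random selection, which is why Fastolfe considers a deadlock nearly impossible to arrange.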
We see direct instances of robots handling priority in first law conflicts throughout the novels, such as in The Naked Sun, when another humaniform robot forces Baley to sit so that it can close the top of a transporter, protecting him from his agoraphobia.
The disadvantage is that it is possible, though it requires extreme circumstances, for multiple second- or third-law appeals to outweigh an appeal to the first law. We again see in The Robots of Dawn that Baley notices a group of robots is willing to overlook his injuries when he insists they are not severe and casually instructs them to go about their business. He knows that a single command cannot outweigh the appeal to the first law, so he reasons that the robots must also have been given very strict prior instructions. The two commands, combined with his own downplaying of the severity of his condition, raise the priority of the second law appeal above that of the first.
The robot in question in the film is said to have decided that one human had a greater chance of survival than the other, and used that information to determine which human to save. This would not be a factor in the I, Robot era, but is a fact of basic robotics in the robot novels era. However, it would seem Spooner's command to save the girl (a second law appeal added to her side) was not of sufficient priority to close the gap between the first law appeal generated by his higher survival odds and the weaker one generated by hers.
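Under that reading, the arithmetic might look something like the following sketch. Every weight here is invented for illustration, since the film never quantifies anything:

```python
# Hypothetical weights only: why a shouted Second Law order might fail
# to flip a First Law choice under the novels-era priority model.
FIRST_LAW_SCALE = 10.0   # First Law appeals dominate by default
SECOND_LAW_WEIGHT = 1.0  # a single casually shouted order carries little weight

spooner_side = FIRST_LAW_SCALE * 0.45                    # his survival odds
girl_side = FIRST_LAW_SCALE * 0.11 + SECOND_LAW_WEIGHT   # her odds + "Save the girl!"

saved = "Spooner" if spooner_side > girl_side else "the girl"
print(saved)  # the order narrows the gap but does not close it
```

On these made-up numbers the order is worth only a tenth of a full First Law appeal, so it would take a far smaller difference in survival odds (or a much more forceful order) before the robot's choice flipped.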