How will intelligent robots interact with humans? In this article, Aaron Saenz discusses the problems that robotic intelligence brings. And although he doesn't offer an answer, I think I have one.

Isaac Asimov famously invented the Three Laws of Robotics so he could advance robots beyond Frankenstein-like stories. The laws were supposed to protect humans and bring about peaceful coexistence with them. Such protection concerns us because robots can be faster, stronger, and more persistent than we are.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
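
Read as a specification, the laws form a strict priority ordering, and even a toy encoding shows how quickly their coverage runs out. The sketch below is purely hypothetical (nothing here comes from Asimov's stories or Saenz's article); it reduces each law to a single flag just to make the ordering visible:

```python
# Hypothetical sketch: the Three Laws encoded as a strict priority
# ordering. Each law yields only to the laws above it.

def first_violated_law(harms_human: bool,
                       disobeys_order: bool,
                       endangers_self: bool) -> int:
    """Return the number of the highest-priority law violated, or 0 if none."""
    if harms_human:          # First Law outranks everything
        return 1
    if disobeys_order:       # Second Law yields only to the First
        return 2
    if endangers_self:       # Third Law yields to both
        return 3
    return 0
```

Note what these flat flags cannot express: an order whose *obedience* would harm a human couples the first two conditions together, so even this tiny encoding is already incomplete. That gap is exactly the kind of loophole discussed below.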

Asimov's stories were largely an exploration of how these laws played out. However, for any robot to apply these laws, it must be intelligent, and we humans don't know how to create intelligent entities.

We do know how to create dumb robots (e.g. our clothes-washing machines), and intelligent humans can make them safe by design. The values of the designer are realized in the product, so a product from a careless designer can be dangerous to people. It usually takes experience and deep, caring thought to design good dumb products. (For example: cars are much safer now than they were 50 years ago.)

However, if our robots were intelligent, what would guide their behavior? The Three Laws are one attempt at an answer. Like all laws, however, they have loopholes wherever their coverage is incomplete, inapplicable, or indeterminate. A law is simply a value encoded as legislation, so the better question is: what values should drive robots?

Saenz thinks this problem of behavior comes from intelligence itself: "you can't predict what intelligence will do, nor how it will evolve." I agree with that statement; however, predictability isn't the real problem. Motivation for good behavior is the problem to solve.

Saenz is right that intelligence finds the loopholes in laws it doesn't want to obey. If robotic intelligence is faster or better in some way than ours, then it will be better at finding advantageous loopholes. However, if that intelligence were motivated by good values, most of these problems would disappear: it would always try to do what is right, whereas obeying laws at best only avoids doing wrong.

Ultimately, this is a moral problem. We need our robots to be moral so that they won't exploit humans. The best moral system is built on a single value: sacrificial love for the benefit of others. This one value will guide any intelligent entity to do good for those being loved, and not to exploit or damage them. This is the value we need to teach our robots.

Humans also need morality to help us learn. We face a continuous stream of information containing conflicting social ideas, and our moral systems help us discern which of those ideas are true. Detected truth builds up our structured knowledge of the world, and intelligence is tightly connected to that body of knowledge.

However, will intelligent robots have the ability to choose for themselves? Humans usually choose value systems to benefit themselves, and those selfish choices are the cause of most of the evil in the world. Yet if robots are not allowed to choose, wouldn't they simply become a new race of slaves for their human masters? That would be immoral.

Would it be possible to teach every one of our robots this value of sacrificial love for others, such that they incorporate it as their own?