Saturday, April 7, 2012

Artificial Intelligence and Emotions

In a Scientific American article, the skeptic Michael Shermer wrote about the IBM supercomputer Watson. In it he asked, "Does Watson know that it won Jeopardy?" Did it take pride in its victory? Those are interesting questions that go to the heart of whether an artificial intelligence will ever become self-aware. Shermer put the first question to IBM's David Ferrucci, who replied, "Yes. Because I told it that it had won."

In truth, there is no way to test whether an artificial intelligence, or any entity, is self-aware. An AI can be programmed to say that it is self-aware, but that proves nothing. We know that we ourselves are self-aware, and we assume that all other humans are as well. In fact, the mechanism of our own self-awareness has not yet been determined; there are many theories, but no definitive answer.

As for the other question, "Did it take pride in its victory?": pride is an emotion, and like all emotions it has value to the entity "feeling" it. An AI or robot programmed to simulate the "pride" emotion could use it as a signal that its methodology for solving whatever problem spurred the emotion is sound.
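To make that concrete, here is a minimal sketch in Python. The names and logic are my own invention, not anything from Watson or IBM; it treats "pride" as nothing more than a stored confidence score. A win raises the agent's confidence in the strategy that produced it, and the agent prefers that strategy the next time around.

    import random

    class Agent:
        """Toy agent whose "pride" is just a confidence score per strategy."""

        def __init__(self, strategies):
            # Start with neutral confidence in every strategy.
            self.pride = {name: 0.0 for name in strategies}

        def choose(self):
            # Prefer the strategy the agent is currently "proudest" of;
            # break ties randomly.
            best = max(self.pride.values())
            candidates = [s for s, p in self.pride.items() if p == best]
            return random.choice(candidates)

        def feedback(self, strategy, won):
            # A win raises "pride" in that strategy, a loss lowers it. The
            # emotion is only this stored number, but it usefully records
            # that the methodology worked.
            self.pride[strategy] += 1.0 if won else -1.0

    agent = Agent(["aggressive", "cautious"])
    agent.feedback("aggressive", won=True)
    print(agent.choose())  # -> "aggressive"

The "emotion" here does real work: it is the mechanism by which past success shapes future behavior, which is exactly the value pride has for the entity "feeling" it.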

In many science fiction stories and movies, robots are shown as purely analytical beings that think only in terms of logic, with no emotional content at all. To my mind, a robot with these characteristics would be badly designed. In designing an AI, the designer should mimic nature. Emotions have a definite function in humans and animals, and they should have one in AIs and robots as well. For one thing, simulated emotion helps the AI relate to human beings, as was pointed out in the movie 2001: A Space Odyssey, in the scene where astronaut Bowman is interviewed by the press.

Depending upon the use the robot is put to, different simulated emotions should be part of its software. "Fear" is a useful emotion for any entity, because it keeps the entity out of danger. "Loyalty" to its master is another emotion that most robots should have. A robot that babysits children should simulate "love" for those children. A soldier robot should "hate" the enemy.

I put quotes around the emotions because AI emotions would not be identical to the emotions felt by human beings, but they would trigger responses similar to those these emotions trigger in humans and animals.
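As a sketch of how that might look in software, consider "fear" in a mobile robot. This is a hypothetical control snippet of my own devising, not any real robot's API: fear is a number that rises as the robot nears a hazard, and the response it triggers is the same one fear triggers in an animal, namely caution.

    def fear_level(distance_to_hazard_m, danger_radius_m=2.0):
        """Simulated "fear": 0.0 when far from the hazard, 1.0 at the hazard."""
        if distance_to_hazard_m >= danger_radius_m:
            return 0.0
        return 1.0 - distance_to_hazard_m / danger_radius_m

    def safe_speed(max_speed_mps, distance_to_hazard_m):
        # The more "afraid" the robot is, the slower it moves; at the
        # hazard itself it stops entirely. The emotion is just a number,
        # but it produces the cautious behavior that fear produces in
        # an animal.
        fear = fear_level(distance_to_hazard_m)
        return max_speed_mps * (1.0 - fear)

    print(safe_speed(1.5, distance_to_hazard_m=0.5))  # creeps along: 0.375 m/s

Beyond two meters from the hazard the robot runs at full speed; inside that radius it slows smoothly to a stop. Swapping in a different emotion for a different role, such as "loyalty" weighting the master's commands, would follow the same pattern of a signal modulating behavior.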
