One of the most popular themes in science fiction is the
robot or computer who develops consciousness, usually to the detriment of
humanity, but not always. I've written such stories myself. In For the Love
of Kumiko, a man falls in love with a sentient android who eventually wishes
to be free. The Isaac Project is about the development of a sentient
robot. A well-known example by another author is Isaac Asimov's series of robot
novels. Recently the SyFy channel broadcast a series called Caprica. There are many more examples, too numerous to
mention.
In a recent Scientific American article, Christof Koch and
Giulio Tononi ask how we can test for consciousness. In other
words, how will we know when we have developed a sentient robot or computer? They
point out that computers today can do amazing things. Some examples are Deep
Blue, which beat the world chess champion, and Watson, a whiz at the
television quiz show Jeopardy. The problem with these artificial intelligences
is that their intelligence is much too narrow to be considered sentient. They
are more like insects that respond to certain stimuli but are unable to adapt
to unfamiliar experiences. Actually, insects are probably more adaptable than
these machines.
One test that the authors propose is to show our robot a set
of photographs in which some things are way out of whack, such as a man
floating in midair in a business suit checking the time on his wristwatch. A
human, even at a young age, immediately knows that the picture is not reality.
One of the pioneers of artificial intelligence, Alan Turing,
proposed that instead of asking whether an AI can think, the question should be
whether a machine, when queried, will give answers that cannot be distinguished
from a human's. The way the test is administered is that a person communicates
electronically with an entity out of his or her sight. The subject may ask
anything he or she likes. If the subject cannot tell that he or she is talking
to a machine, the AI is said to "think."
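The protocol described above can be sketched in a few lines of code. Everything here is an invented stand-in for illustration: the canned responder functions take the place of a live human and a live chatbot, and the judge's heuristic is a toy, not a real interrogation strategy.

```python
import random

def machine_responder(question: str) -> str:
    # Hypothetical chatbot: always deflects with the same vague reply.
    return "That's an interesting question. What do you think?"

def human_responder(question: str) -> str:
    # Stand-in for a person typing answers from another room;
    # the reply varies with the question asked.
    return f"Hmm, about '{question[:12]}'... I'd say it depends."

def run_session(judge_questions, responder):
    """The judge sends questions electronically and sees only text back."""
    return [(q, responder(q)) for q in judge_questions]

def judge_guess(transcript) -> str:
    # Toy heuristic: identical, evasive answers look machine-like.
    answers = [a for _, a in transcript]
    return "machine" if len(set(answers)) == 1 else "human"

if __name__ == "__main__":
    questions = ["What is your favorite memory?",
                 "Describe the smell of rain.",
                 "Could a man float in midair checking his watch?"]
    hidden = random.choice([machine_responder, human_responder])
    transcript = run_session(questions, hidden)
    print("Judge's verdict:", judge_guess(transcript))
```

The point of the structure, as in Turing's original game, is that the judge sees only the text channel; nothing about the responder's identity leaks through except the answers themselves.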
The authors propose an integrated information theory of
consciousness. Many people have an intuitive understanding that the subjective,
phenomenal states that make up everyday experience relate to how the brain
integrates incoming sensory signals with information from memory into a
cohesive picture of the world. To be conscious an entity must have a large
repertoire of information. This is where the weird pictures come into play. Any normal six-year-old can tell you what is
wrong in the picture. No current AI is intelligent enough to do so.
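The intuition behind integration can be illustrated with a toy calculation. To be clear, this is not Tononi's actual phi measure, only a hand-rolled sketch of the underlying idea: the parts of an integrated system share information, while the parts of a modular system do not.

```python
from math import log2

def mutual_information(joint):
    """Mutual information (in bits) between two parts of a system.

    joint: dict mapping (x, y) state pairs to probabilities.
    """
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    mi = 0.0
    for (x, y), p in joint.items():
        if p > 0:
            mi += p * log2(p / (px[x] * py[y]))
    return mi

# Two parts that always agree ("integrated"): 1 bit of shared information.
integrated = {(0, 0): 0.5, (1, 1): 0.5}
# Two parts that vary independently ("modular"): no shared information.
modular = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

print(mutual_information(integrated))  # → 1.0
print(mutual_information(modular))     # → 0.0
```

On this toy view, the brain that builds a cohesive picture of the world resembles the first system: its parts constrain one another, so the whole carries information that no part carries alone.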