Thursday, February 7, 2013

Thought Experiments



In a recent Scientific American, I came across an article titled "Thought Experiments." I had read that Einstein came up with his Theory of Relativity by using "thought experiments." Of course, his theory has since been confirmed by actual experiments; until then, it was simply a theory.

What baffled me about the article, though, was the subheading: "Some philosophers are doing more than thinking deeply. They are also conducting scientific experiments relating to the nature of free will and of good and evil." What!? This sounded like something I would read in a magazine about philosophy or religion, not in a magazine devoted to science.

In the first place, a "thought experiment" is not science but speculation, perhaps bolstered by mathematics as in Einstein's case. This is the kind of thing philosophers, science fiction writers, futurists, prophets and other imaginative thinkers have been doing for thousands of years. One notable "thought experiment" is René Descartes' reasoning from "I think, therefore I am" as the one irrefutable assumption to several further conclusions, all of which were refuted by later philosophers.

As to the "nature of free will," I believe psychologists have been doing actual experiments on this for some time without coming to any definite conclusions. Philosophers have debated "free will" to death. And then the author throws "good and evil" into the mix, terms that have no intrinsic meaning. Every person on this planet has a different idea of what is good and what is evil.

Reading further in the article, I find that the people the author calls "experimental philosophers" team up with psychologists and publish in journals. "They have spawned hundreds of papers and come up with surprising results and some strong opinions on every side." Note that he does not say they have come up with any actual scientific facts.

The article blabs on in this fashion for three pages, mixing "thought experiments" with actual studies in psychology in this strange manner. The author concludes with "... it can sometimes be helpful, and occasionally indispensable, to have a better understanding of the cognitive processes that give rise to these beliefs." Duh!

Shame on you, Scientific American, for printing such nonsense.

Friday, February 1, 2013

Test for Consciousness



One of the most popular themes in science fiction is the robot or computer that develops consciousness, usually to the detriment of humanity, but not always. I've written such stories myself. In For the Love of Kumiko, a man falls in love with a sentient android who eventually wishes to be free. The Isaac Project is about the development of a sentient robot. Well-known examples by another author are the robot novels of Isaac Asimov. Recently the SyFy channel broadcast a series called Caprica. There are many more examples, too numerous to mention.

In a recent Scientific American article by Christof Koch and Giulio Tononi, the question is asked: how do we test for consciousness? In other words, how will we know when we have developed a sentient robot or computer? They point out that computers today can do amazing things. Some examples are Deep Blue, which beat the world chess champion, and Watson, which is a whiz at the television quiz show Jeopardy! The problem with these artificial intelligences is that their intelligence is much too narrow to be considered sentient. They are more like insects that respond to certain stimuli but are unable to adapt to unfamiliar experiences. Actually, insects are probably more adaptable than these machines.

One test that the authors propose is to show our robot a set of photographs in which some things are way out of whack, such as a man floating in midair in a business suit checking the time on his wristwatch. A human, even at a young age, immediately knows that the picture is not reality.

One of the pioneers of artificial intelligence, Alan Turing, proposed that instead of asking whether an AI can think, the question should be whether a machine, when queried, will give answers that cannot be distinguished from a human's. The way the test is administered is that a person communicates electronically with an entity out of his or her sight. The subject may ask anything he or she likes. If the subject cannot tell that he or she is talking to a machine, the AI is said to "think."
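
Just to make the protocol concrete, here is a minimal sketch of one test session in Python. It is not from the article or from Turing's paper; the judge_ask, judge_guess and respond callbacks are hypothetical stand-ins for the interrogator and the hidden participants.

import random

# A minimal sketch of Turing's blind question-and-answer protocol.
# The callbacks are hypothetical stand-ins, not anything from the article.
def turing_test_session(human_respond, machine_respond,
                        judge_ask, judge_guess, rounds=5):
    # Randomly seat either the human or the machine out of the judge's sight.
    hidden_is_machine = random.choice([True, False])
    respond = machine_respond if hidden_is_machine else human_respond

    transcript = []
    for _ in range(rounds):
        question = judge_ask(transcript)   # the judge may ask anything
        answer = respond(question)         # the hidden party answers
        transcript.append((question, answer))

    # The judge guesses whether the hidden party was the machine.
    judged_machine = judge_guess(transcript)
    return hidden_is_machine, judged_machine

Over many such sessions, if the judge's guesses are no better than chance, the machine is said to "think" in Turing's sense.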

The authors propose an integrated information theory of consciousness. Many people have an intuitive understanding that the subjective, phenomenal states that make up everyday experience relate to how the brain integrates incoming sensory signals with information from memory into a cohesive picture of the world. To be conscious, an entity must have a large repertoire of information. This is where the weird pictures come into play. Any normal six-year-old can tell you what is wrong in the picture. No current AI is intelligent enough to do so.
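
To give a rough idea of how such a picture test might be scored, here is a hypothetical sketch; describe_anomaly stands in for the candidate AI, and labeled_images for photographs annotated with the anomalies a person would name. None of these names come from the article.

# A rough, hypothetical sketch of scoring the "weird picture" test.
# labeled_images is a list of (image, expected_phrases) pairs, where
# expected_phrases lists acceptable descriptions of what is wrong,
# e.g. ["floating in midair", "floating"].
def score_picture_test(describe_anomaly, labeled_images):
    correct = 0
    for image, expected_phrases in labeled_images:
        answer = describe_anomaly(image).lower()
        # Credit the candidate if it names any of the expected anomalies.
        if any(phrase in answer for phrase in expected_phrases):
            correct += 1
    return correct / len(labeled_images)

By the authors' account, any normal six-year-old clears this bar easily, while no current AI does.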