Saturday, March 31, 2012

Android Bodies

In another post I blogged about how a humanoid robot (sometimes called an android) could be built. I divided the task into three parts, the robot's body, brain (or computer) and its programming (intelligence). In this article I'd like to get into more detail about what is required to design and build the android's body.

First it needs a power supply. At present this is likely to be rechargeable batteries. Other possible sources could be compressed gases, hydraulics, flywheel energy, the decay of organic material, nuclear fusion (when and if it ever becomes available) or other nuclear sources, solar energy, and so on. For the purpose of a man-like frame, many of these possibilities are either too bulky or too complicated.

The next thing we need to think about is actuators, the parts that convert stored energy into movement. In humans and animals, muscles do this job. At present most robotic actuators are electric motors that rotate a joint. A spring can be placed in series with the motor for improved force control, which is particularly needed for walking. Another method is to use wire that contracts when electricity is applied and relaxes when it is removed, working much like muscle in a human being. New plastic materials that expand and contract have been used for the facial muscles and arms of animatronic (humanoid-appearing) robots.
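Here is a minimal sketch, in Python, of the spring-in-series idea (often called a series elastic actuator): the joint force is estimated from the spring's deflection, and a simple proportional controller nudges the motor toward a desired force. The stiffness, gain, and function names are invented for illustration, not taken from any real robot.

    # Minimal sketch of series-elastic force control (illustrative values only).
    SPRING_K = 250.0   # spring stiffness, N/m (assumed)
    GAIN = 0.002       # proportional gain mapping force error to motor motion (assumed)

    def actuator_step(motor_pos, joint_pos, desired_force):
        """One control step: estimate force from spring deflection, nudge the motor."""
        measured_force = SPRING_K * (motor_pos - joint_pos)  # Hooke's law on the series spring
        error = desired_force - measured_force
        new_motor_pos = motor_pos + GAIN * error             # move the motor to reduce the error
        return new_motor_pos, measured_force

    # Example: drive the joint toward a 10 N contact force while the joint is held still.
    motor, joint = 0.0, 0.0
    for _ in range(200):
        motor, force = actuator_step(motor, joint, desired_force=10.0)
    print(round(force, 2))  # approaches 10 N

The spring both protects the gearbox from shocks and turns the hard problem of force control into the easier problem of position control, which is one reason this arrangement shows up in walking robots.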

One big problem to be solved is a sense of touch. A humanoid robot must be capable of determining how much pressure to apply with its hands to grasp items properly. Current robotic and prosthetic hands receive far less tactile information than the human hand. Recent research has developed a tactile sensor array that mimics the mechanical properties and touch receptors of human fingertips.
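As a toy illustration of pressure-controlled grasping, here is a sketch in Python. The read_tactile_array() function is a hypothetical stand-in for real fingertip hardware, and the target force and step size are invented.

    # Sketch of pressure-feedback grasping with a hypothetical tactile array.
    TARGET_FORCE = 5.0   # total fingertip force we want, in newtons (assumed)
    STEP = 0.01          # how far the gripper closes per control cycle (assumed units)

    def read_tactile_array(closure):
        """Stand-in for a 4x4 fingertip sensor array. A real robot would read hardware;
        here contact force simply grows once the gripper has closed past 0.2."""
        contact = max(0.0, closure - 0.2) * 10.0
        return [[contact / 16.0] * 4 for _ in range(4)]

    def total_force(array):
        return sum(sum(row) for row in array)

    closure = 0.0
    while total_force(read_tactile_array(closure)) < TARGET_FORCE:
        closure += STEP          # keep closing until the fingertips feel enough pressure

    print(f"stopped at closure={closure:.2f}, "
          f"force={total_force(read_tactile_array(closure)):.2f} N")

A real hand would also watch for slip and for uneven pressure across the array, but the basic loop of close, feel, and adjust is the same.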

Another important sense is vision. Our android must be capable of recognizing and distinguishing between objects and of estimating their spatial relationship to each other. There is an entire subfield of artificial intelligence, computer vision, concerned with designing systems that mimic the processing and behavior of biological visual systems.
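As a toy illustration of locating an object in a scene, here is a sketch using the OpenCV library's template matching. The "scene" and "template" are synthetic arrays so the example is self-contained; a real android would work from camera images and need far more than this.

    # Sketch of object localization by template matching, using OpenCV (cv2) and NumPy.
    import numpy as np
    import cv2

    scene = np.zeros((200, 200), dtype=np.uint8)
    scene[60:100, 120:160] = 255               # a bright 40x40 square plays the role of an object
    template = scene[50:110, 110:170].copy()   # crop a patch containing the object as our "model"

    result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)

    print("best match at (x, y):", max_loc, "score:", round(max_val, 3))
    # Estimating relationships between objects then reduces to comparing such locations.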

Another difficult problem to solve is walking. Several robots have been built that can walk reliably on two legs, but none so far is as robust as a human being. Some have said that Honda's robot ASIMO walks as though it had to use the toilet. Nonetheless, several robots built by Marc Raibert at MIT have successfully demonstrated very dynamic walking, even running and performing somersaults.

Speech recognition is a requirement if the humanoid robot is going to interact with human beings. Interpreting a continuous flow of sounds coming from a human in real time is a difficult task. Sometimes we don't even understand each other when one of us speaks with an unfamiliar accent. Currently the best systems can recognize continuous natural speech at up to 160 words per minute with an accuracy of about ninety-five percent.
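For a feel of what this looks like in code today, here is a sketch using the third-party Python package SpeechRecognition. The file name command.wav is an assumption, and recognize_google() hands the audio to a free web service, so a network connection is needed and accuracy will vary.

    # Sketch of turning recorded speech into text with the SpeechRecognition package
    # (pip install SpeechRecognition).
    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.AudioFile("command.wav") as source:
        audio = recognizer.record(source)          # read the whole file into memory

    try:
        text = recognizer.recognize_google(audio)  # ask the web service for a transcript
        print("Heard:", text)
    except sr.UnknownValueError:
        print("Could not make out the words.")     # the continuous-speech problem in action
    except sr.RequestError:
        print("Could not reach the recognition service.")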

There are many other design considerations. The ones I have listed are the most difficult to achieve.

Sunday, March 25, 2012

Artificial Intelligence

In many science fiction stories, there are electromechanical devices, robots and computers who are at least as smart as human beings and sometimes smarter. But, what is the reality? Is it possible to build a machine that "thinks" as well or better than a human being? Or is this simply an impossible dream and will never happen? If artificial intelligence (abbreviated AI) is possible, how close are the computers of today towards that goal?

Like most questions of this sort, it depends upon the definition of artificial intelligence. There is no consensus even within the AI scientific community. Elaine Rich, in her book Artificial Intelligence, defines it this way: "Artificial intelligence is the study of how to make computers do things at which, at the moment, people are better." One good example of something that fits this definition is chess playing. Once it was thought that people who played darn good chess were such geniuses that no machine could ever beat them. Perhaps they are. But in 1997 the supercomputer Deep Blue beat the world chess champion, Garry Kasparov. Nonetheless, chess aside, Garry Kasparov can do many things that Deep Blue cannot. A chess program can only do one thing well, and that is play chess. It is like an idiot savant.

A better definition of what we would expect from an AI is as follows: "Artificial intelligence is the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics we associate with intelligence in human behavior." This quote is from Avron Barr and Edward A. Feigenbaum's book, The Handbook of Artificial Intelligence. But what are these characteristics? In his book Gödel, Escher, Bach: An Eternal Golden Braid, Douglas R. Hofstadter gives the following "essential abilities for intelligence" (by the way, I highly recommend this book, which is entertaining as well as informative):

Ÿ "To respond to situations very flexibly."

Ÿ "To make sense out of ambiguous or contradictory messages."

Ÿ "To recognize the relative importance of different elements of a situation."

Ÿ "To find similarities between situations despite differences which may separate them."

Ÿ "To draw distinctions between situations despite similarities which may link them."

The problem is that abilities such as those listed above, which are easy for human beings, are very difficult to program into a computer. Nonetheless, progress has been made. Some areas of research where machine intelligence has come a long way are:

Expert Systems: Software designed to act as an expert in a particular field, for example, an income tax consultant. I happen to use one of these every year to do my taxes and, believe me, it's a lot better than trying to make sense of the U.S. Tax Code yourself.

Natural Language Processing: Software that understands and/or generates a natural language such as English. Translation software also fits into this category. I have more to say on this subject below.

Speech Recognition: Hardware and software that understands human speech. I've noticed lately that many automated phone answering services now use this technology.

Computer Vision: Hardware and software that can interpret visual images.

Robotics: A robot is a machine that can perform manual tasks that previously were performed by a human being, such as vacuuming a rug, assembling automobiles, or dancing. I have a Roomba vacuum, which does a tolerable job but sometimes gets stuck under low-hung furniture.

Computer Assisted Instruction: Teaching machines. This was kind of a fad for a while but doesn't seem to be used much anymore.

Automatic Programming: Software that can create other software.

Planning and Decision Support: Software that aids planning.

Expert Systems

"An expert system is a class of computer programs developed by researchers in artificial intelligence. In essence, they are made up of a set of rules that analyze information (usually supplied by the user of the system) about a specific class of problems, as well as provide analysis of the problem(s), and, depending upon their design, recommend a course of user action in order to implement corrections."

I got this definition from Wikipedia, in an article that gives a good introductory explanation of this branch of artificial intelligence. For a deeper understanding of what is meant by an expert system, you may want to read the article. I'll try to summarize as briefly as I can.

The idea behind expert systems is to provide the kind of help usually provided by an expert in a particular field, such as software troubleshooting or diagnosing an illness in a medical patient. Three features of expert systems are rules of thumb, fuzzy logic, and a data base of solutions. When an expert in a field, such as a physician, goes about solving a problem, such as determining what ails a patient, he or she usually relies on several rules of thumb. Depending upon the answers to key questions about the problem, the expert knows what the solution is by applying a rule of thumb. For example, suppose a patient complains about frequent severe headaches. After asking questions about the headaches and other accompanying symptoms, and perhaps performing some tests, the doctor may determine that the person is suffering from migraines and prescribe pills. In expert systems, these rules of thumb are coded into the software.
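To make the rule-of-thumb idea concrete, here is a toy sketch in Python. The symptoms and rules are invented for illustration; a real medical expert system would encode hundreds of rules developed with physicians.

    # A toy rule-based diagnosis sketch in the spirit of the headache example above.
    RULES = [
        ({"severe headache", "nausea", "light sensitivity"}, "possible migraine"),
        ({"severe headache", "fever", "stiff neck"}, "possible meningitis (seek care urgently)"),
        ({"mild headache", "screen use all day"}, "possible eye strain"),
    ]

    def diagnose(symptoms):
        """Fire every rule of thumb whose conditions are all present in the symptoms."""
        return [conclusion for conditions, conclusion in RULES if conditions <= symptoms]

    print(diagnose({"severe headache", "nausea", "light sensitivity"}))
    # ['possible migraine']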

Fuzzy logic is logic based on approximations rather than formal logic. It takes into account such vague statements as "almost," "nearly," and so forth, and manipulates them to come up with an approximate answer. For example, if a patient is asked how much pain he or she is in and replies "not so much," this is considered less pain than "it hurts terribly." Certain conclusions may be drawn from whichever answer is given.
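Here is an equally small sketch of the fuzzy-logic idea: vague phrases map to degrees between 0 and 1 instead of strict true or false, and conclusions are drawn from those degrees. The phrase-to-degree table and thresholds are invented for illustration.

    # Sketch of degrees-of-truth reasoning over vague pain descriptions.
    PAIN_DEGREE = {
        "not so much": 0.2,
        "it hurts a bit": 0.4,
        "quite a lot": 0.7,
        "it hurts terribly": 0.95,
    }

    def recommend(answer):
        degree = PAIN_DEGREE.get(answer, 0.5)      # unknown phrases default to "moderate"
        if degree < 0.3:
            return "rest and fluids"
        elif degree < 0.8:
            return "over-the-counter pain relief"
        return "see a doctor"

    print(recommend("not so much"))        # rest and fluids
    print(recommend("it hurts terribly"))  # see a doctor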

Expert systems also usually have large data bases which can be readily accessed using the rules of thumb and fuzzy logic.

Anyone who has gone to a software web site and used its self-troubleshooting system has used an expert system. Computer and video games also use expert systems.

In my novel, The Isaac Project (available at Renaissance Pageturner Editions, http://www.pageturnereditions.com), the core software of the artificial intelligence being developed is an expert system.

Natural Language Processing

If you were going to design a humanoid robot, one of the most important things it must be able to do is understand human speech, at least to the point where it could understand the commands you give it. It would also be nice if it could talk back to you. To be able to communicate with your computer in a normal conversational way would also be a good thing. You may have also noticed that lately, when you call certain businesses, you don't necessarily have to press buttons to enter information into their automated answering systems. Some allow you to speak the required information. All these artificial intelligence tasks fall within the province of natural language processing. Other tasks that require natural language processing are translation from one human language to another, transforming text to speech, answering questions, and retrieving information.

Natural language processing is the study and software development associated with the automatic generation and understanding of natural human languages. Natural language generation software converts information from computer data bases into normal human language. Natural language understanding software converts human language into forms that a computer can understand and manipulate.
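As a toy illustration of the two directions, here is a sketch in Python: one function generates a sentence from database-style fields, and the other pulls a structured form back out of a typed command. The record fields, device names, and keyword matching are invented and far simpler than real natural language software.

    def generate(record):
        """Natural language generation: database record -> English sentence."""
        return f"{record['name']} has an appointment on {record['day']} at {record['time']}."

    def understand(command):
        """Natural language understanding: English command -> structured form."""
        words = command.lower().rstrip(".!").split()
        action = "turn_on" if "on" in words else "turn_off" if "off" in words else "unknown"
        device = next((w for w in words if w in {"lights", "heater", "radio"}), None)
        return {"action": action, "device": device}

    print(generate({"name": "Dr. Lee", "day": "Tuesday", "time": "3 pm"}))
    print(understand("Please turn the lights on."))
    # {'action': 'turn_on', 'device': 'lights'}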

One of the earliest systems, called SHRDLU, operated in a restricted world of blocks: it used a small vocabulary to manipulate blocks of different shapes and sizes on a computer monitor screen. Because it worked extremely well, researchers were overly optimistic about developing natural language software. However, it turned out that in the real world, language processing was much more difficult than supposed.

Some of the problems are:

• Ambiguity, for example when it is not clear which word in a sentence an adjective or adverb is modifying. Some strings of words can be interpreted in many ways.

• In spoken language, the sounds that represent successive letters blend into each other.

• Some written languages, such as Chinese and Thai, do not signal word boundaries.

• Most words have several meanings, and the grammar of natural languages is itself ambiguous.

• Typing errors, speech irregularities, and OCR errors.

• Some sentences don't literally mean what they say.

Many of these problems have been partially or wholly solved, but artificial intelligence experts still have a long way to go before you can have an intelligent conversation with your computer or friendly robot.

I note with interest the various web sites with talking heads called chatbots. I urge you to visit one of these sites to learn what a natural language artificial intelligence artifact can do. A popular one is A.L.I.C.E., maintained by the A.L.I.C.E. Artificial Intelligence Foundation.
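For a taste of how the simplest chatbots work, here is a pattern-and-response sketch in Python, loosely in the spirit of ELIZA and the AIML patterns behind A.L.I.C.E. The patterns and replies are invented examples.

    # A minimal pattern-matching chatbot.
    import re

    PATTERNS = [
        (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
        (re.compile(r"\bmy name is (\w+)", re.IGNORECASE), "Nice to meet you, {0}."),
        (re.compile(r"\b(hello|hi)\b", re.IGNORECASE), "Hello! What would you like to talk about?"),
    ]

    def reply(user_text):
        for pattern, template in PATTERNS:
            match = pattern.search(user_text)
            if match:
                return template.format(*match.groups())
        return "Tell me more."        # fallback when nothing matches

    print(reply("Hello there"))
    print(reply("I am worried about robots"))

Real chatbots have thousands of such patterns plus some memory of the conversation, but the basic trick of matching and echoing back is the same.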


Saturday, March 17, 2012

Isaac Asimov's 3 Laws of Robotics

In Science Fiction, the Three Laws of Robotics are a set of three rules written by Isaac Asimov, which most robots that appear in his fiction must obey. Introduced in his 1942 short story "Runaround," the Laws state the following:

1. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

If we could actually build robots intelligent enough to be self-aware, would these laws actually make sense? I propose not. Take the First Law. In the first place, how could the robot tell a human being from another robot that looked like a human being, or from a hologram of a human being? You might say, so what? As long as the robot cannot harm a human being or anything that resembles a human being, that is all to the good. But what if a humanoid robot or hologram and a real human being are both in danger? How would the robot know which one to save? As far as that goes, if two human beings are in danger at the same time, how does a robot know which one to save? (Note: this exact situation is shown in the movie I, Robot. In the movie the robot made the wrong choice.)

For certain uses, a manufacturer would not want to apply the Laws in that order. For example, suppose the robots are to be used for military purposes. In this case, the Laws built into the robot might go something like this:

1. A robot must obey the orders given to it by its superior officer.

2. A robot must protect its own existence, and that of other soldier robots, except where doing so conflicts with the First Law.

3. A robot may only harm those human beings or robots designated as "The Enemy" by its superior officer, and only if they are not under a flag of truce, surrendering, or designated as "Prisoners of War."

In my novel, The Isaac Project, the situation of the military wanting to change the Three Laws provides part of the conflict in the book.

One error that Isaac Asimov made was to assume that the intelligence of the robot would somehow be built into its electronic circuitry. Actually, we know now that the intelligence of a robot would more likely be in its software. This changes the situation quite a bit, since software can have errors in it that are not always detected during testing. Also, it can be modified. Depending upon how the software is installed, it might be subject to viruses, worms, and other sorts of malicious tricks by unscrupulous hackers, just as our computers are now.
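To make that point concrete, here is a sketch in Python in which the Laws are nothing more than an ordered list of checks. Everything in it is an invented simplification of Asimov's Laws, but it shows why reordering or tampering with that list changes the robot's behavior.

    # The Laws as data: an ordered list of checks that software (or malware) can change.
    def harms_human(action):
        return action.get("harms_human", False)

    def disobeys_order(action):
        return action.get("disobeys_order", False)

    def endangers_self(action):
        return action.get("endangers_self", False)

    ASIMOV_PRIORITY = [harms_human, disobeys_order, endangers_self]

    def first_violated_law(action, laws=ASIMOV_PRIORITY):
        """Return the number of the highest-priority law the action breaks, or None."""
        for rank, broken in enumerate(laws, start=1):
            if broken(action):
                return rank
        return None

    print(first_violated_law({"harms_human": True}))      # 1
    print(first_violated_law({"endangers_self": True}))   # 3
    print(first_violated_law({}))                         # None

Swap in a different list, say one that drops harms_human entirely, and the very same robot body behaves very differently. That is the software risk described above, and it is exactly the kind of change a military customer or a clever hacker might make.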