The “Modern” History of Artificial Intelligence and Programs

In 1956, at the Dartmouth Conference, John McCarthy said that artificial intelligence is "making a machine behave in ways that would be called intelligent if a human were so behaving".  Looking back on this definition, many people today disagree with it, since it ignores the possibility of achieving strong artificial intelligence.

Wikipedia offers a broader definition: artificial intelligence is "intelligence arising from an artificial device" (Wikipedia.com).  I like this definition because it does not rule out the possibility of strong AI; in fact, it leaves the possibilities quite open.

There are many different definitions of AI.  Most of them can be categorized as concerning one of four things: systems that think like humans, systems that act like humans, systems that think rationally, or systems that act rationally (Wikipedia.com).  So as you can see, it is difficult to come up with a truly precise definition of what exactly artificial intelligence is in modern terms.

Here is a look at some early AI programs that helped pave the path to modern artificial intelligence and robotics:

The first working AI programs were written in the UK by Christopher Strachey, Dietrich Prinz, and Anthony Oettinger. Strachey taught at Harrow School and was an amateur programmer; he later became Director of the Programming Research Group at Oxford University. Prinz worked for the engineering firm Ferranti Ltd., which would become famous for building the Ferranti Mark I computer in collaboration with Manchester University (the machine that held, and ran, the earliest artificial intelligence programs). Oettinger worked at the Mathematical Laboratory at Cambridge University, home of the EDSAC computer.

(Ferranti Mark I computer)

(EDSAC computer)

Strachey decided that the game of checkers (a.k.a. draughts) would be ideal for his first game-playing program. In May 1951, he initially coded his checkers program for the pilot model of Turing's Automatic Computing Engine.  This attempt was unsuccessful: coding errors and hardware changes led to the demise of the program.  Strachey was also very dissatisfied with the method the program employed to evaluate board positions. He moved forward, using his dissatisfaction to fuel his creativity, and wrote an improved version for the Ferranti Mark I at Manchester.  By the summer of 1952, the new version could "play a complete game of Draughts at a reasonable speed", said Strachey.

Prinz's chess program, also written for the Ferranti Mark I, first ran in November 1951. The program examined every possible move until a solution was found; on average, several thousand moves had to be examined in the course of solving a problem.  For this reason, and because computers of the time were quite slow, the program took a very long time to choose which move it should make.
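
To make the "examine every possible move" idea concrete, here is a minimal sketch of exhaustive search in Python. This is not Prinz's actual code; the helpers legal_moves, apply_move, and is_goal are hypothetical stand-ins for a real chess program's move generator and mate test.

# Exhaustive (brute-force) search: try every legal move, then every reply,
# and so on, until a position satisfying the goal (e.g. checkmate) is found.
# legal_moves, apply_move, and is_goal are assumed, user-supplied functions.

def exhaustive_search(position, depth, legal_moves, apply_move, is_goal):
    """Return a list of moves that reaches a goal position, or None."""
    if is_goal(position):
        return []                       # goal already reached; no moves needed
    if depth == 0:
        return None                     # search horizon exhausted
    for move in legal_moves(position):  # examine every possible move...
        continuation = exhaustive_search(apply_move(position, move), depth - 1,
                                         legal_moves, apply_move, is_goal)
        if continuation is not None:    # ...until a solution is found
            return [move] + continuation
    return None

Because the number of positions multiplies with every extra move of depth, even a shallow problem forces the program to look at thousands of positions, which is exactly why it was so slow on 1951 hardware.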

Turing started to program what he called his “Turochamp” chess-player on the Ferranti Mark I.  What made this program different was that the Turochamp was meant to play a complete game, operating not by exhaustive search but under the guidance of rule-of-thumb principles devised by Turing.  Unfortunately, he was not able to finish this project.
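
To illustrate the difference, here is a toy sketch of rule-of-thumb move selection in Python. It is not Turochamp; the move features and weights below are invented purely for the example.

# Heuristic (rule-of-thumb) move selection, as opposed to exhaustive search.
# The features and their weights are invented for illustration only.

def score_move(move):
    """Score a candidate move with a few simple rules of thumb."""
    score = 0
    score += 10 * move.get("material_won", 0)               # prefer winning material
    score += 2 if move.get("controls_centre") else 0        # prefer central control
    score -= 5 if move.get("leaves_piece_hanging") else 0   # avoid obvious blunders
    return score

def choose_move(candidate_moves):
    """Pick the highest-scoring move instead of searching every continuation."""
    return max(candidate_moves, key=score_move)

candidates = [
    {"name": "Nf3", "controls_centre": True},
    {"name": "Qxb7", "material_won": 1, "leaves_piece_hanging": True},
    {"name": "e4", "controls_centre": True},
]
print(choose_move(candidates)["name"])  # "Qxb7": material gain outweighs the penalty here

The point is that each candidate move is judged by a handful of cheap rules rather than by searching every possible continuation.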

The first AI program to run in the U.S. was also a checkers program, written in 1952 by Arthur Samuel of IBM for the IBM 701. Strachey had publicized his program at a computer conference in 1952, and building on that work, Samuel spent years developing a better and faster program. In 1955 he added features that enabled the program to learn from experience and therefore improve its play.  This was key to the eventual progress that would be made in computer programs and artificial intelligence. Samuel included mechanisms for both rote learning and generalization, and he made continuous improvements to the program until it finally reached the point at which it beat a former Connecticut checkers champion in 1962.
 
  (IBM 701 computer)

*An aside - About 19 IBM 701 machines were built from 1952 to 1955. Most of these went to government agencies for defense, atomic research, the navy, and the weather bureau.



To further enhance his program, Samuel added a form of heuristic learning based on self-play: two copies of the program would play against each other, and the program that was defeated changed its tactics to match the one that beat it.  This resembled biological evolution (except with no biology).  In essence, the smarter program survived, and only that program would carry on.

(The History of Computing. http://www.alanturing.net)
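
Here is a minimal Python sketch of that self-play idea (not Samuel's actual program): each player is just a set of evaluation weights, and after every game the loser adopts a slightly varied copy of the winner's weights. The play_game function is a hypothetical stand-in for a full checkers engine.

import random

def evaluate(board_features, weights):
    """Score a board position as a weighted sum of hand-picked features."""
    return sum(w * f for w, f in zip(weights, board_features))

def imitate(winning_weights, scale=0.1):
    """Copy the winner's weights, with a small random variation."""
    return [w + random.uniform(-scale, scale) for w in winning_weights]

def self_play(weights_a, weights_b, play_game, rounds=100):
    """After each game, the defeated side adopts the winner's tactics."""
    for _ in range(rounds):
        winner = play_game(weights_a, weights_b)  # assumed to return 'A' or 'B'
        if winner == 'A':
            weights_b = imitate(weights_a)
        else:
            weights_a = imitate(weights_b)
    return weights_a, weights_b

In a real engine, play_game would use evaluate (with each side's weights) to choose moves; here it is left abstract to keep the sketch short.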

Natural language communication: Is it possible?

Eliza (1966) and Parry (early 1970s) were two early programs that attempted natural language conversation. Parry, written by Stanford University psychiatrist Kenneth Colby, simulated a human paranoiac. Parry's responses are capitalized in the following extract from a "conversation" between Parry and a psychiatric interviewer.

Psychologist: Why are you in the hospital?
Parry: I SHOULDN'T BE HERE.

Psychologist: Who brought you here?
Parry: THE POLICE.

Psychologist: What trouble did you have with the police?
Parry: COPS DON'T DO THEIR JOB.

Psychologist: What do they do instead?
Parry: COPS ARREST THE WRONG PEOPLE.

Psychologist: They arrest the right ones sometimes.
Parry: THE LAW SHOULD GET THOSE ITALIAN CROOKS.

Psychologist: Are you Italian?
Parry: MY BACKGROUND IS BRITISH BUT I WAS BORN IN THIS COUNTRY.

Psychologist: Do you know any Italian crooks?
Parry: I TRY TO AVOID THE UNDERWORLD.

What is amazing about these few lines is that when psychiatrists were asked to say whether they came from a paranoid person or a computer, some of them thought it was an actual person.

Eliza was written by Joseph Weizenbaum at MIT in 1966.  Eliza was meant to play the part of a human therapist.  Read the lines and you will be amazed at the life-like qualities Eliza can reproduce.

Man: Men are all alike.
Eliza: IN WHAT WAY

Man: They're always bugging us about something or other.
Eliza: CAN YOU THINK OF A SPECIFIC EXAMPLE

Man: Well, my boyfriend made me come here.
Eliza: YOUR BOYFRIEND MADE YOU COME HERE

Man: He says I'm depressed much of the time.
Eliza: I AM SORRY TO HEAR THAT YOU ARE DEPRESSED

Man: It's true. I am unhappy.
Eliza: DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY

Man: I need some help, that much seems certain.
Eliza: WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP

(The History of Computing. http://www.alanturing.net)

Both Eliza and Parry were deemed to have the intelligence of a "jukebox", as one psychologist put it.  They both used simple techniques of word recognition and play-back.  But even so, I think these two programs are pretty neat for their time.
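
Here is a minimal Python sketch of that word-recognition-and-play-back technique. It is not Weizenbaum's actual ELIZA script: the keyword patterns and canned templates below are invented, and a real ELIZA also swaps pronouns ("my" to "your", "me" to "you"), which this toy omits.

import re

RULES = [
    (re.compile(r"\bi am (.*)", re.I), "I AM SORRY TO HEAR THAT YOU ARE {0}"),
    (re.compile(r"\bi need (.*)", re.I), "WHAT WOULD IT MEAN TO YOU IF YOU GOT {0}"),
    (re.compile(r"\bmy (\w+) (.*)", re.I), "YOUR {0} {1}"),
    (re.compile(r"\balways\b", re.I), "CAN YOU THINK OF A SPECIFIC EXAMPLE"),
    (re.compile(r"\ball\b", re.I), "IN WHAT WAY"),
]

def respond(sentence):
    """Return the first canned response whose keyword pattern matches."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            # Play back the matched fragment inside the canned template.
            return template.format(*match.groups()).upper().rstrip(".")
    return "PLEASE GO ON"   # default reply when no keyword is recognized

print(respond("Men are all alike."))        # IN WHAT WAY
print(respond("It's true. I am unhappy."))  # I AM SORRY TO HEAR THAT YOU ARE UNHAPPY

The program understands nothing; it only recognizes keywords and echoes fragments of the input back, which is exactly the "jukebox" criticism.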

Learning in AI:
Learning comes in a number of different forms. The simplest is learning by trial and error. The simple memorizing of individual items (solutions to problems, words of vocabulary, and so on) is known as rote learning.
Rote learning is easy to perform on a computer.  Generalization, however, is hard to work into a computer program.  By generalization I mean that the program can come up with a solution to a problem it has not previously tackled.  For example, if a program that adds simple numbers had never been shown that 2 + 4 = 6, it would have a hard time coming up with the correct answer, 6.
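
A minimal Python sketch of the contrast, using the addition example above (the class names are invented for illustration):

class RoteAdder:
    """Rote learning: memorize individual question/answer pairs."""
    def __init__(self):
        self.memory = {}

    def learn(self, a, b, answer):
        self.memory[(a, b)] = answer            # store the exact item seen

    def answer(self, a, b):
        # Fails on anything it has not been shown before.
        return self.memory.get((a, b), None)

class GeneralizingAdder:
    """Generalization: apply the rule 'add the two numbers' to new problems."""
    def answer(self, a, b):
        return a + b

rote = RoteAdder()
rote.learn(1, 1, 2)
rote.learn(3, 5, 8)
print(rote.answer(2, 4))                 # None: 2 + 4 = 6 was never memorized
print(GeneralizingAdder().answer(2, 4))  # 6: the general rule covers it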

GA – Genetic Algorithm: More than modern, perhaps future

The genetic algorithm (GA) was the first technique in a field called "evolutionary computing".  It was introduced by John Holland in 1975, in collaboration with his research group at the University of Michigan, Ann Arbor.  GAs mimic natural evolution, the concept Darwin gave us: this type of computing produces successive generations of software that work better and better toward their specific goal(s).
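
Here is a minimal Python sketch of the generational loop a GA runs. The toy goal (maximizing the number of 1-bits in a bit string) and all parameter values are assumptions chosen only to keep the example self-contained.

import random

GENES, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.02

def fitness(individual):
    return sum(individual)                      # toy goal: count of 1-bits

def crossover(parent_a, parent_b):
    cut = random.randint(1, GENES - 1)          # one-point crossover
    return parent_a[:cut] + parent_b[cut:]

def mutate(individual):
    return [1 - g if random.random() < MUTATION_RATE else g for g in individual]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    # Selection: the fitter half of the population becomes the parents.
    parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    # Reproduction: children combine genes from two parents, with mutation.
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

print(max(fitness(ind) for ind in population))  # fitness climbs over the generations

Each pass through the loop is one "generation" of software, and the best fitness in the population tends to climb, which is the sense in which successive generations work better and better.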

One current use of a GA system is in detective work: a witness, working together with a GA system, produces a face that becomes increasingly similar to the face of the criminal as the witness recollects it.

(The History of Computing. http://www.alanturing.net)

I have found a flowchart that will help you understand the entire process that takes place in a GA system:

(Flowchart taken from http://www.sv.vt.edu/classes/ESM4714/Student_Proj/class94/