The basic idea in case-based reasoning, or CBR, is that the program stores problems together with their solutions. Then, when a new problem comes up, the program searches its database for a similar problem by finding analogous aspects between the two.
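The retrieval step described above can be sketched in a few lines. This is a minimal illustration, not any particular CBR system: problems are represented as feature sets (an assumption made for simplicity), and the most similar stored case is found by Jaccard similarity. All names and example cases here are made up.

```python
def similarity(a, b):
    """Jaccard similarity between two feature sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

class CaseBase:
    """Stores (problem features, solution) pairs and retrieves by similarity."""

    def __init__(self):
        self.cases = []

    def store(self, features, solution):
        self.cases.append((frozenset(features), solution))

    def retrieve(self, features):
        """Return the solution of the stored case most similar to `features`."""
        features = frozenset(features)
        best = max(self.cases, key=lambda case: similarity(case[0], features))
        return best[1]

cb = CaseBase()
cb.store({"grip", "lift", "cube"}, "use parallel gripper")
cb.store({"push", "cylinder"}, "use flat pusher")
print(cb.retrieve({"lift", "cube", "rotate"}))  # → use parallel gripper
```

Real CBR systems add a step this sketch omits: adapting the retrieved solution to fit the differences between the old problem and the new one.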
We wanted to solve robot problems and needed some vision, action, reasoning, planning, and so forth. We even used some structural learning, such as was being explored by Patrick Winston.
You don't understand anything until you learn it more than one way.
In general we are least aware of what our minds do best.
This is a tricky domain because, unlike simple arithmetic, solving a calculus problem, and in particular performing integration, requires being smart about which integration technique to use: integration by partial fractions, integration by parts, and so on.
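That kind of technique selection can be sketched as a rule-based dispatcher. Everything below is an illustrative assumption, not a real computer-algebra system: expressions are tiny tagged tuples, e.g. `("div", ("const", 1), ("poly", 2))` standing in for a rational function with a degree-2 polynomial denominator, and each rule maps a structural pattern to a technique.

```python
def choose_technique(expr):
    """Pick an integration technique from the integrand's structure.

    `expr` is a tagged tuple such as ("div", numerator, denominator) or
    ("mul", factor, factor); tags and rules are made up for illustration.
    """
    tag = expr[0]
    if tag == "div" and expr[2][0] == "poly" and expr[2][1] >= 2:
        # rational function with a polynomial denominator of degree >= 2
        return "partial fractions"
    if tag == "mul" and any(sub[0] in ("log", "exp") for sub in expr[1:]):
        # product involving a log/exp factor: differentiate one factor,
        # integrate the other
        return "integration by parts"
    return "direct antiderivative"

print(choose_technique(("div", ("const", 1), ("poly", 2))))  # partial fractions
print(choose_technique(("mul", ("poly", 1), ("log", "x"))))  # integration by parts
print(choose_technique(("cos", "x")))                        # direct antiderivative
```

The point is the dispatch structure itself: being "smart" here means recognizing which pattern the integrand fits before committing to a technique.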
Societies need rules that make no sense for individuals. For example, it makes no difference whether a single car drives on the left or on the right. But it makes all the difference when there are many cars!
There are three basic approaches to AI: case-based, rule-based, and connectionist reasoning.
Kubrick's vision seemed to be that humans are doomed, whereas Clarke's is that humans are moving on to a better stage of evolution.
I heard that the same thing occurred in a scene in Alien, where the creature pops out of the chest of a crewman. The other actors didn't know what was to happen; the director wanted to get true surprise.
No computer has ever been designed that is aware of what it's doing; but most of the time, we aren't either.
I think Lenat is headed in the right direction, but someone needs to include a knowledge base about learning.
If you just have a single problem to solve, then fine, go ahead and use a neural network. But if you want to do science and understand how to choose architectures, or how to go to a new problem, you have to understand what different architectures can and cannot do.
Once when I was standing at the base, they started rotating the set and a big, heavy wrench fell down from the 12 o'clock position of the set, and got buried in the ground a few feet from me. I could have been killed!
I believed in realism, as summarized by John McCarthy's comment to the effect that if we worked really hard, we'd have an intelligent system within four to four hundred years.
There was a failure to recognize the deep problems in AI; for instance, those captured in Blocks World. The people building physical robots learned nothing.