Wednesday, February 22, 2006

smart computers

My amusement stems from the fact that both articles address essentially the same concept, which is getting a computer to understand written language, but their authors have (I assume) never met. Certainly their concepts have not intermingled.

It seems to me that for a computer to "read" and understand, it needs the kind of contextual grasp that comes from a lot of experience reading progressively more complex material, filled with metaphor and cultural reference. Most of us start with things like "See Spot run." and build from there, learning about what language means based on what we experience in life and what we see other writers employ.

Would you seriously expect to hand a book like Moby Dick to a 2nd grader and have them read and understand it? Of course not. You know that even if they can read, they likely cannot understand all of the complexities contained therein. Certainly they'd miss many cultural references.

(Actually, I missed almost all the cultural references, too, and I was in 10th grade. And I hated the damned book...but that's beside the point.)

The second article's approach seems much more sophisticated, because even though it's still going for "instant understanding", it at least involves a period of learning during which a context system is built up. The likely meaning of words comes from the software's "experience" of what other words it is surrounded by. This is, in a sense, the software tirelessly reading all that material to learn something about language. That is, I would posit, the very basic roots of understanding starting to take hold.
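To make that idea concrete (a toy sketch of my own, not anything the article's software actually does): simply counting which words appear near which other words already gives a crude, statistical "context" for each word.

```python
from collections import Counter, defaultdict

def cooccurrence_counts(sentences, window=2):
    """For each word, count the other words that appear within
    `window` positions of it. Words with similar neighbor counts
    tend to play similar roles, which is a crude, statistical
    form of the 'context' built up by reading."""
    counts = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for i, word in enumerate(words):
            lo, hi = max(0, i - window), min(len(words), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[word][words[j]] += 1
    return counts

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
]
counts = cooccurrence_counts(corpus)
# 'cat' and 'dog' end up with overlapping neighbors ('the', 'sat', 'on'),
# hinting that they play similar roles in these sentences.
shared = set(counts["cat"]) & set(counts["dog"])
```

Even this tiny example "notices" that cat and dog are used similarly, without being told anything about animals.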

If you have children, you know that when you read to them, they ask you questions, many of them with seemingly obvious answers. We answer them, again and again, gently remembering that with time, they will know these things without asking.

Why, then, are we so abusive to our machines?

Friday, February 17, 2006

Many amateurs versus fewer experts

I was reading a review recently on Groklaw of a book called "The Wisdom of Crowds". Its basic conclusion is that large groups of different types of individuals can often figure out problems more accurately than even a group of experts on the topic.
We can see evidence of that with thousands of webmasters trying to manipulate and outfox the tens of PhD computer scientists working for search engines. Black-hat SEO webmasters are still always one step ahead of the smartest brains in the world, all trying to be No. 1 for terms such as "Big Tits". Give me several webmasters over a few university graduates any day.

It got me thinking... What if we designed an AI system that worked like that? What if, rather than developing a single algorithm (like a big neural net) that could solve every problem we throw at it, we instead designed a system into which we just threw every problem-solving algorithm we could find or come up with? There'd have to be some kind of "aggregator" for the results, and certainly any given problem would only be solvable by a subset of the algorithms, but could we expect better results from this approach than from using just one "finely tuned" system?
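Here's a toy sketch of the shape I have in mind (the solver functions and the majority-vote aggregator are just my own illustrative inventions, not a real design): each algorithm either answers or abstains, and the aggregator tallies whatever comes back.

```python
from collections import Counter

def aggregate(solvers, problem):
    """Run every solver on the problem; solvers that can't handle it
    return None (they abstain), and the rest are tallied by majority
    vote. The aggregator itself knows nothing about the problem."""
    answers = []
    for solver in solvers:
        answer = solver(problem)
        if answer is not None:
            answers.append(answer)
    if not answers:
        return None
    return Counter(answers).most_common(1)[0][0]

# Three toy "algorithms" deciding whether a number is prime;
# only a subset of them applies to any given input.
def trial_division(n):
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

def even_shortcut(n):
    # Only knows about even numbers; abstains on everything else.
    return False if n > 2 and n % 2 == 0 else None

def small_table(n):
    # Only knows a handful of answers; abstains outside its table.
    return {2: True, 3: True, 4: False, 5: True}.get(n)

solvers = [trial_division, even_shortcut, small_table]
```

Asking `aggregate(solvers, 7)` still gets the right answer even though two of the three solvers abstain, which is exactly the "crowd covers the gaps" behavior I'm after.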

Perhaps it would be more flexible, more resilient to unexpected situations.

Architecturally, it seems it would also be easy to divide the work into independent threads or processes for efficiency, since the many algorithms would operate on the same question independently of one another.
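Because each algorithm works on the question independently, farming them out to a thread pool is nearly a one-liner. Another toy sketch, this time using Python's standard thread pool; the solver list is again made up purely for illustration:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def aggregate_parallel(solvers, problem):
    """Same majority-vote idea, but each solver runs in its own
    thread, since the solvers never need to talk to each other.
    Solvers that return None are treated as abstaining."""
    with ThreadPoolExecutor(max_workers=len(solvers)) as pool:
        results = list(pool.map(lambda s: s(problem), solvers))
    answers = [a for a in results if a is not None]
    if not answers:
        return None
    return Counter(answers).most_common(1)[0][0]

# Made-up solvers just to exercise the plumbing: a parity check,
# one that always abstains, and a small lookup table.
solvers = [
    lambda n: n % 2 == 0,
    lambda n: None,
    lambda n: n in {2, 4, 6, 8},
]
```

Nothing in the aggregator changes; only the scheduling does, which is the appeal of keeping the algorithms independent.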

Thursday, February 16, 2006

My fat Frog

Hi. I have a frog, a fat frog.