### Many amateurs versus fewer experts

I was recently reading a book review on Groklaw of a book called "The Wisdom of Crowds". Its basic conclusion is that a large group of diverse individuals can often solve problems more accurately than even a group of experts on the topic.

We can see evidence of that in the thousands of webmasters trying to manipulate and outfox the tens of PhD computer scientists working for search engines. Black-hat SEO webmasters are still always one step ahead of the smartest brains in the world, all trying to be #1 for terms such as Big Tits. Give me several webmasters over a few university graduates any day.

It got me thinking... What if we designed an AI system that worked like that? What if, rather than developing a single algorithm (like a big neural net) that could solve every problem we throw at it, we instead designed a system into which we just threw every problem-solving algorithm we could find or come up with? There'd have to be some kind of "aggregator" for the results, and certainly any given problem would only be solvable by a subset of the algorithms, but could we expect better results from this approach than from using just one "finely tuned" system?
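To make the idea concrete, here's a minimal sketch of what such a system might look like: several independent "solvers" all attack the same problem, and an aggregator majority-votes on their answers. The solver functions here are made-up toy heuristics, not real algorithms, and the whole design is just one possible shape for the aggregator idea.

```python
from collections import Counter

# Three hypothetical, independent "solvers" for a yes/no question.
# Each uses a different (toy) heuristic.
def solver_a(x):
    return x % 2 == 0   # toy rule: even numbers are "yes"

def solver_b(x):
    return x > 10       # a different, independent rule

def solver_c(x):
    return x % 3 == 0   # yet another unrelated rule

SOLVERS = [solver_a, solver_b, solver_c]

def aggregate(problem):
    """Ask every solver, then return the most common answer."""
    answers = [solve(problem) for solve in SOLVERS]
    return Counter(answers).most_common(1)[0][0]

print(aggregate(12))  # all three rules say True -> True
print(aggregate(7))   # all three rules say False -> False
```

In a real system the aggregator could be smarter than a raw vote, e.g. weighting each solver by its past track record on similar problems.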

Perhaps it would be more flexible, more resilient to unexpected situations.

Architecturally, it also seems easy to divide the work into independent threads or processes for efficiency, since the many algorithms would each be operating on the same question independently.
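Because the solvers never talk to each other, that parallelism falls out almost for free. A sketch using Python's standard `concurrent.futures`, with the same hypothetical solvers as before:

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

# The same made-up solvers as before, standing in for real algorithms.
def solver_a(x): return x % 2 == 0
def solver_b(x): return x > 10
def solver_c(x): return x % 3 == 0

SOLVERS = [solver_a, solver_b, solver_c]

def aggregate_parallel(problem):
    """Run every solver concurrently, then majority-vote on the answers."""
    with ThreadPoolExecutor(max_workers=len(SOLVERS)) as pool:
        answers = list(pool.map(lambda solve: solve(problem), SOLVERS))
    return Counter(answers).most_common(1)[0][0]

print(aggregate_parallel(9))  # odd, <= 10, divisible by 3 -> False wins 2-1
```

For CPU-heavy solvers a process pool (or separate machines) would be the natural next step, but the structure stays the same: fan out, collect, aggregate.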
