
Edge.org is running an article by Jaron Lanier on the current drive towards meta-content and collective rule on the Internet (think Wikipedia, BoingBoing, Digg, etc.), along with some responses from leaders of "The Collective". Lanier writes:

> A core belief of the wiki world is that whatever problems exist in the wiki will be incrementally corrected as the process unfolds. This is analogous to the claims of Hyper-Libertarians who put infinite faith in a free market, or the Hyper-Lefties who are somehow able to sit through consensus decision-making processes.
>
> What we are witnessing today is the alarming rise of the fallacy of the infallible collective. Numerous elite organizations have been swept off their feet by the idea. They are inspired by the rise of the Wikipedia, by the wealth of Google, and by the rush of entrepreneurs to be the most Meta. Government agencies, top corporate planning departments, and major universities have all gotten the bug.
>
> What I’ve seen is a loss of insight and subtlety, a disregard for the nuances of considered opinions, and an increased tendency to enshrine the official or normative beliefs of an organization. Why isn’t everyone screaming about the recent epidemic of inappropriate uses of the collective? It seems to me the reason is that bad old ideas look confusingly fresh when they are packaged as technology.

There's a trade-off that we've not yet dealt with as a society: in the networked world, massive flows of data sit in front of us on a daily basis. Multiple sources of 24-hour news feeds on TV, via RSS, in newspapers, on the radio, and on original (media-based) websites. We have emails and press releases, journal entries by our friends, IM conversations, online essays, email listservs, and virtual communities we keep up with. It's data overload. So the current pushback is the Google, BoingBoing, and Wikipedia collective approach: we trade the diversity of voices talking at us for trust in these sites to distill the information into more manageable channels. Instead of surfing the net and news for amusing oddities and cyber-libertarian chatter for 3 hours a day, I read BoingBoing. Instead of reading news articles from the Post or the Times, or watching CNN, I'll check Google's news page for articles I'm interested in (and fine, I'll admit to reading the BBC World newsfeed). Instead of sifting through Google search results for a piece of factual information, I'll check out Wikipedia.

What I suppose is troubling is that we've always filtered information this way; I challenge anyone to argue that news agencies don't filter what stories they print (though I'd like to think that at some golden age in the past it wasn't as partisan). I trust Google to implement their filtering programmatically and reasonably fairly. Wikipedia is less trustworthy, particularly on touchy subjects, but the tradeoff there is that Wikipedia is likely to have almost instantaneously updated information, whereas Britannica simply can't.

I echo some of the other critiques in saying that his argument doesn't scale out very well. Is Open Source also blind trust in the collective intelligence? It would seem that the most popular and mission-critical OSS projects provide more stable and powerful solutions than their closed-source competitors.

I think there's definitely some worry in placing too much trust in the Collective/Hive Mind -- but one need look no further than Lanier's own discussion page on Wikipedia, where there's a short debate on what should and should not be part of his bio, or the semi-democratic deletion attempt made on the bio of Henry Farrell, a political scientist who studies IT and privacy issues at GWU. There are very definite conflicting personalities under the hood of Wikipedia; the monolithic collective movement of its articles is just a pleasant facade.

Also, and this, I think, is the root of the matter: I have problems taking cheek about digital collective Maoism from a white guy with dreads.