I said something to this effect last week on Uncontrolled Vocabulary, but it bears repeating.
ACRLog discusses algorithmic attempts to authenticate online information, touching on, among other things, the recent Wired story about the Wikipedia Scanner, which mines IP addresses from Wikipedia edits to find out just who’s saying that Diebold never makes mistakes or what have you.
It strikes me that all these efforts are related to the seemingly unending desire that people have for a quick and dirty route to authoritative information. What they’re looking for, I suspect, is a label (a metaphor that, for me as a former anti-sweatshop activist, holds a good deal of meaning). People like labels, and I am no exception. “Oh, okay, it’s fair trade coffee, so I’ll get that.” “Oh, this is free range chicken.” “Oh, this won the National Book Critics Circle Award.” But information doesn’t work that way. You can’t say, “Oh, I believe everything in the Encyclopaedia Britannica,” and leave it at that.
There’s no such label for information, not in any grand sense. An algorithm might help you trace an IP address and learn the probable identity of a contributor to a wiki, but you’ll still need to know something about who that person or entity is and what their biases are before you can know whether their statements are trustworthy. I won’t even get into the profound political implications of slapping an “authoritative” label on information, as I trust you’ve all read Orwell and school history textbooks and so on. But there are days when I think that’s what Google is trying to do–not organize the world’s information and make it universally accessible and useful, but organize and filter it and, in doing so, suggest an authority to those first ten search results that they may or may not possess. It’s almost as if the purpose of organizing all that information is to prohibit critical thinking, not to promote it.
That’s hardly a new practice, of course–but the tools used to do it now are much bigger, much broader, and much more pervasive.