
The Weakest Link

In the battle over Web 2.0, Wikipedia.org is under attack—but we shouldn’t give up on it

Last month the Web’s latest darling, Wikipedia.org, found itself under attack by the mainstream media. This wasn’t the first time that the community-authored encyclopedia, which features entries on everything from Back to the Future’s flux capacitor to Georg Wilhelm Friedrich Hegel, found its revered place at the cool-kids table challenged, but it may have been the first time it was challenged globally. “Can you trust Wikipedia?” demanded South Africa’s Mail & Guardian newspaper. “Though often good, Wikipedia is of uneven quality and has its share of bogus entries,” complained Wired News. “Wikipedia founder admits to serious quality problems,” announced The Register in the UK, followed by a cheeky: “Yes it’s garbage, but it’s delivered so much faster!”

The attack marked a shift in general sentiment towards the site, which boasts a 2004 Webby Award for best community site, a Golden Nica from Ars Electronica 2004, and an audience that has quadrupled in the last year. The catalyst for this souring of the Wikipedia love-in was an online essay by Nicholas Carr, former executive editor of the Harvard Business Review. Wikipedia is only the most prominent target in his blog post, entitled “The Amorality of Web 2.0”, which takes aim at something much bigger: the very direction in which the Web is heading.

Carr attacked the Web’s most vocal supporters and their vision for its future. He derided their language of rapture (“Could it be that the Internet—or what O'Reilly calls Web 2.0—is really the successor to the human potential movement?” asked Wired in a gushing profile of “Internet savant” Tim O’Reilly) and the spread of the “cult of the amateur,” which favours open-community involvement by the masses over authorship by the experts. He levelled his charges at the latest crop of popular Web applications as a group, but it was Wikipedia that took the fall, probably because nobody can really put their finger on what “Web 2.0” actually is.

The problem is that “Web 2.0” is a marketing term. While brainstorming titles for a conference on the changing Internet economy, publishing house O’Reilly Media and marketing firm MediaLive came up with the name in 2004. The basic idea: Web businesses and trends that survived the dot-com crash are generally of a different character than those that didn’t—but that character is shifting and elusive. O’Reilly and MediaLive basically threw all those sites into a pot and dubbed the jambalaya “Web 2.0.” This explains why most definitions of the phenomenon involve examples: weblogs replaced personal Web pages, Flickr replaced Ofoto, Wikipedia replaced Britannica Online, and Google replaced Yahoo’s index. Internet pundits have spent the past year working backwards from those examples, trying to come up with a definition that ties all these sites together, with limited success. In an effort to clarify things, Tim O’Reilly set out in September of this year to write the definitive definition, but it runs to about fourteen pages. So much for clarity.

For the average person, what Web 2.0 boils down to is Web content that is authored, delivered or influenced primarily by an open community. (There are a bunch of other criteria that deal with data sources, multiple-device software and scalability, but as far as you and I are concerned, “open community” is the key.) While Yahoo relied on a small group of human editors, Google automatically calculates a site’s popularity across the entire Web to determine its PageRank. Blogs can be referenced and commented on by other blogs or anonymous users. Flickr lets photographers’ friends attach notes to any picture. Wikipedia content can be edited at any time by anyone.
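To make that contrast concrete, here is a minimal sketch of the idea behind PageRank—not Google’s actual code, just the textbook power-iteration version, with an invented three-page link graph and a standard damping factor as illustrative assumptions:

```python
# A toy version of the PageRank idea: a page's score is fed by the
# scores of the pages linking to it, so "popularity" emerges from the
# link structure of the whole Web rather than from human editors.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline score...
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        # ...and passes the rest of its current score to the pages it links to.
        for page, outlinks in links.items():
            for target in outlinks:
                new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Hypothetical three-page Web: A links to B and C, B to C, C back to A.
web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
print(pagerank(web))  # C, linked by both A and B, comes out on top
```

The page names and the tiny graph are made up for illustration; the real algorithm runs over billions of pages. But the principle is the same one at stake here: the crowd of links, not an editor, decides what rises.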

The problem, says Carr, is that content which can be edited at any time by anyone is not necessarily good. He gives examples of poorly written articles and straight-up factual errors in Wikipedia, then scolds, “it seems fair to ask exactly when the intelligence in ‘collective intelligence’ will begin to manifest itself.” He advocates a return to the Britannica Online model, which relies on authorship by experts, complaining that “the promoters of Web 2.0 venerate the amateur and distrust the professional.” The point is well taken—a reference site is only as strong as its weakest link, pun intended. When your pool of amateur authors includes every human being on the planet with the means to poke at a keyboard … well, who knows what Paris Hilton is contributing?

In other words, we need to step back and evaluate whether we’re building mediocrity and inaccuracy into the very tools that power the Web. False information has always found its way online, but Web 2.0 makes it so much easier to introduce mistakes into the sites we use most often. Is this the future we want?

Wikipedia’s supporters argue that millions of eyes will find any and all errors—or, to use the open-source adage, “given enough eyeballs, all bugs are shallow.” But is this true? A recent article in the UK’s Guardian newspaper asked experts in various fields to score Wikipedia entries. A quick tally of the results gave the site a pitiful average accuracy rating of 56 percent. (It should be noted, however, that just because someone is an expert, it doesn’t mean they’re fair: Vogue editor Alexandra Shulman, in predictable diva fashion, gave the haute-couture entry a “0/10” with the explanation: “As a very, very broad-sweep description there are a few correct facts included, but every value judgment it makes is wrong.”)

The “can I trust the Web?” argument is as old as the Internet itself. In fact, the “consensus” versus “credentials” battle over Web 2.0 has echoes of the canon debate that ripped across American colleges in the late 1980s and early 1990s. Faced with a student body growing ever more liberal, academics began to consider bumping Dante off the curriculum, thereby freeing up resources for students who would rather study “Crime Fiction of the Twentieth Century” or “World Literature in English.” (Liberal academics would argue that the debate was about respecting different viewpoints.) In his book The Closing of the American Mind, Allan Bloom argued that the canon was necessary because without it students would be unable to distinguish between “the sublime and the trash, insight and propaganda.” For Bloom, good teachers helped students sort the Chaucers from the Chandlers. Sure, the New York Times Best-Seller List proved that more people were reading Stephen King than Shakespeare, but did that mean that they should be?

The question of whether we need experts to guide us is a critical one as we push forward with the Web. But the irony, which Carr seems blissfully unaware of, is that by attacking O’Reilly—an established technology publisher and pundit since the Web’s inception—he is participating in the exact process he’s mocking. When it comes to social software and Internet technologies, he is the amateur and O’Reilly the professional. Carr’s criticisms are important, but that’s the point: everyone should have their say. When Web 2.0 works, it works because the important stuff gets propagated, which is exactly what happened when Carr’s essay got spread around via weblogs. The glut of mediocre content has been a problem with the Internet since the beginning, but if blogs and Google and other post-dot-com tools have done anything, they’ve partially alleviated the problem by raising the worthwhile stuff to the top of the heap.

Wikipedia has faults, but the site should get better at ironing those out—slowly, gradually—as it garners more attention and respect. Wikipedia’s users have been reacting to the media scrutiny in the smartest way possible—by using expert evaluations to improve the site. User Tim Chambers flagged for review the entries attacked by the Mail & Guardian, and other users have rewritten them. A follow-up article in the South African newspaper says the offending references have since been cleaned up.

A few years ago, most journalists viewed the weblog community as a novelty; now an increasing number of reporters and publications run their own blogs. By the same token, Wikipedia will become a premier resource only when the experts begin to care enough to participate and contribute themselves—rather than criticize weak entries, perhaps they’ll just fix them. While the Alexandra Shulmans of the world may never stoop so low, others hopefully will.

The main fault in Carr’s argument against Web 2.0 is that he casts the problem as one of expert versus amateur, when in fact both exist on the Web. It’s no longer an either/or situation. Web 2.0 gives people with knowledge and credentials the tools to fix others’ mistakes. And if we’re lucky, they’ll take up the challenge.