I've been reading a lot lately on evaluating information, especially Web sources. Much of the literature on evaluating Web sources predates the level of sophistication and richness of content we're seeing now: open-access journals, government reports, newspapers, Google's Life magazine image archive, and so on. But one article I skimmed again recently discusses why popularity and relevance, which seem to be (so far as outsiders can determine) two of the major criteria in Google's ranking algorithm, aren't valid for evaluating an information source.
Tell an undergraduate student this and watch the confusion crawl across their face. I also happen to think that it's not necessarily true.
What's really going on here is that there are two parts of evaluation. One is, "Is this good information?" The other is, "Should I use it?"
There are scenarios where one might have a valid use for information that one knows is of poor quality, after all. But that's not really my point.
My point is that while popularity is not a good sole indicator of quality, it's worth considering: first, it probably put that search result on the first page for you; second, one would do well to think about why so many people are looking for, clicking on, and linking to this thing.
Might even be because the information in it is good.
The real criticism here, I think, is of popularity as an authority indicator. A few months ago I came across an article in the computer science literature, from the late 1990s or early 2000s, that suggested exactly this as a search engine algorithm.
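The best-known version of that idea is link analysis of the PageRank flavor: a page is authoritative to the degree that other (authoritative) pages link to it. I don't know which article I had in mind, so this is just a minimal illustrative sketch of the general technique, with a made-up three-page graph and the conventional damping factor, not anyone's actual production algorithm:

```python
# A minimal power-iteration sketch of link-based authority (PageRank-style).
# The graph, damping factor, and iteration count are illustrative assumptions.

def link_rank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start from a uniform score
    for _ in range(iterations):
        # Everyone gets a small baseline; the rest flows along links.
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += damping * share
            else:
                # Dangling page: spread its score evenly over all pages.
                for target in pages:
                    new[target] += damping * rank[page] / n
        rank = new
    return rank

# Hypothetical toy graph: "c" is linked to by both "a" and "b".
graph = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
ranks = link_rank(graph)
```

On this toy graph, "c" ends up outranking "b" simply because more pages point at it, which is exactly the move the criticism targets: the scores measure where attention flows, not whether the content is any good.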
Wouldn't that be an interesting idea to get students to unpack?
It's times like these that I wish I had entire semesters, instead of maybe one hour over the course of four years, to get this stuff across.