Omission 
Wikipedia has no entry for "homeostatic property cluster".

Up until just a moment ago, it did not even list it as a possible interpretation of the acronym HPC.

What I'm reading now 
In What to believe now [Amazon/GoodReads], David Coady sets out to do applied epistemology. Most of the book is about expertise and democracy,* which is fine. With the caveat that I haven't read it all, I'll lament the fact that most of the book seems to be Coady summarizing and critiquing Alvin Goldman's work on these topics. Missing, for example, is any discussion of Harry Collins' work on expertise.

Wikipedia is discussed for three pages in the conclusion, and I'll focus on that because he quotes me. Coady writes:
Most contributors to Wikipedia, unlike most rumor-mongers, see themselves as engaged in a single collective enterprise. This enterprise is governed by rules, and Wikipedia has a hierarchy that seeks to enforce those rules. So, when P.D. Magnus characterizes the claims made in Wikipedia as "more like 'claims made in New York' than 'claims made in the New York Times'" he is mistaken. ... Wikipedia is a reasonably reliable source for a reasonably wide range of subjects because of the contingent fact that it has a reasonably good culture at the moment.

Perhaps the rhetorical flourish in the passage he cites overstates my point, because claims in Wikipedia fall under one institutional umbrella in a way that claims made in New York do not. But my point is that there is sufficient variation in the quality and reliability of Wikipedia articles that it is wrong to treat them all together. Even though it is 'reasonably reliable' across a 'reasonably wide range', it is better to pay attention to the kind of article that you are consulting. Wikipedia is large enough that it is better to think of it as multiple overlapping communities, rather than as a single monolithic culture.

* EDIT: As Coady points out in the comments, there are also chapters on rumours, conspiracy theories, and blogging.

Digesting the whole Wikipedia 
In the most recent issue of First Monday, Royce Kimmons has an interesting analysis of community contributions in Wikipedia. His results suggest that most individual entries are the work of separate contributions by a small number of people, rather than the efforts of an ongoing community. The cool thing is that it is a systematic study of all Wikipedia entries and their histories.
Abstract: Wikipedia stands as an undeniable success in online participation and collaboration. However, previous attempts at studying collaboration within Wikipedia have focused on simple metrics like rigor (i.e., the number of revisions in an article’s revision history) and diversity (i.e., the number of authors that have contributed to a given article) or have made generalizations about collaboration within Wikipedia based upon the content validity of a few select articles. By looking more closely at metrics associated with each extant Wikipedia article (N=3,427,236) along with all revisions (N=225,226,370), this study attempts to understand what collaboration within Wikipedia actually looks like under the surface. Findings suggest that typical Wikipedia articles are not rigorous, in a collaborative sense, and do not reflect much diversity in the construction of content and macro–structural writing, leading to the conclusion that most articles in Wikipedia are not reflective of the collaborative efforts of the community but, rather, represent the work of relatively few contributors.
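
Kimmons's two basic metrics are simple enough to compute for yourself. Here is a minimal Python sketch, using the requests library and the public MediaWiki revisions API, of what "rigor" and "diversity" amount to for a single article. The article title is only an illustration, and unlike the study the sketch looks at just the most recent batch of revisions rather than full histories across the whole encyclopedia.

# A rough take on Kimmons's "rigor" and "diversity" metrics for one article,
# using revision metadata from the public MediaWiki API.
# Without following API continuation, only the most recent 500 revisions are
# fetched, so counts for heavily edited articles will be underestimates.
import requests

def rigor_and_diversity(title):
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "prop": "revisions",
            "titles": title,
            "rvprop": "user",
            "rvlimit": "max",  # up to 500 revisions in a single request
            "format": "json",
        },
    )
    page = next(iter(resp.json()["query"]["pages"].values()))
    revisions = page.get("revisions", [])
    rigor = len(revisions)                                  # number of revisions
    diversity = len({r.get("user", "hidden") for r in revisions})  # distinct contributors
    return rigor, diversity

print(rigor_and_diversity("Homeostasis"))  # prints (rigor, diversity) for an example article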


Short subject on featured articles 
In my little study of Wikipedia, I initially stumbled on the difference between featured and regular articles. If I had thought about it in advance, I would not have tested any featured articles at all. I had included them, however, so I reported the results and suggested that the data about featured articles be set aside.

This was not an admission that featured articles were especially reliable, but just that they were different. They needed to be thought of as a separate population.

Now somebody has taken a look at them. In this week's First Monday, David Lindsey directly evaluates the quality of featured articles: "Evaluating quality control of Wikipedia's feature articles". The upshot is that many featured articles are good but that some are terrible. They are, despite the 'feature' glitter, much like the rest of Wikipedia. He concludes with the suggestion that, "[t]o put it simply, being a featured article may not mean much at all."

As a methodological aside, Lindsey evaluated the current version of specific articles rather than the development of those articles across time. I still suspect that Wikipedians pay more attention, on average, to featured articles than they do to other articles. If that's true, then random vandalism is probably caught more quickly and reliably on featured pages. That is compatible with Lindsey's conclusion that the articles can still be poorly written, misleading, or just downright bad.

How to be better at fraud 
We often assess claims based on plausibility of style and content. In writing about Wikipedia, I argue that these assessments can be frustrated by community editing. The implausible details can be taken out of false accounts, making the falsity harder to detect. Some people respond to my argument by denying that this happens.

Reading Eugenie Samuel Reich's Plastic Fantastic, I bumped into a similar phenomenon. Reich is a science journalist, and the book is about fraudulent science. Her claim is that peer review does not do an especially good job of catching deliberate fabrication. Moreover, scientists who perpetrate fraud often exploit reviewers' comments and questions in order to make their fabrications more plausible. Reich writes:
Not only is there no guarantee that a thorough review process will detect a false claim, but even more disturbingly, a thorough review may do little more than reveal to authors what changes they need to make in order to turn a false claim into a more plausible scam. [p. 122]
The parallel with Wikipedia is not precise, but in both cases conscientious but imperfect editorial oversight results in public versions which are more plausible false accounts than the original submissions.

Scammer scientists exploiting this can publish more plausible scam papers than they could have otherwise. Yet one might hope that, although this helps fraudulent papers on the timescale of months, fraudulent research programs will still be uncovered in the course of just a couple of years. The parallel hope for the Wikipedia is that false claims will be corrected eventually.

So the hope is that fraud burns brighter by exploiting peer review but will still burn out in relatively short order. Consistent with this is the fact that none of the cases of fraud which Reich describes have gone undetected for more than a few years, and each was discovered in time to ruin the scientists responsible. Yet that may just be because she can only report scientific fraud which was ultimately detected.
