Who put the we in the wikipedia? 
Ron alerted me to the existence of Wikipedia Scanner, a service that does the reverse lookup to follow anonymous Wikipedia edits back to their source. As one might expect, it has turned up a number of cases in which corporations actively manipulated their own entries. You can get details from Wired-- or almost anywhere else, for that matter. It's already been on Colbert, making this post about two weeks behind the wave.

As someone sceptical about Wikipedia, I feel like I should have something to say about Wiki Scanner. For example, I could point out that doing reverse lookup on IP addresses only works so long as MegaHugeConglomoco does their wiki hatchet work from the corporate office. If they have it done by their PR firm or by employees telecommuting from their homes, Wiki Scanner won't make the connection. And surely any company playing Wikipedia chameleon now will make specific efforts to disguise their involvement.

All of that means that Wiki Scanner cannot generate any reliable statistics about how many organizations are manipulating Wikipedia, how many entries are manipulated, or how many manipulations remain in the present Wikipedia corpus. It sifts out anecdotes of specific abuses. Although illustrative, these anecdotes are about what one would have expected. Of course, it would be fun if it turned up some singular surprises.

Addendum, Sep 7

Mark alerts me to Wikipedia trust coloring, software designed to indicate the trustworthiness of stretches of text in the Wikipedia. It does so by effectively making the entry history manifest in a single page. The presupposition is that text that has been in the Wikipedia through more edits is more likely to be true, or at least that users are more reliable if text they write persists through more edits.

As Mark puts it:
Now we know, in addition to <some guy said so>, that <some guy said so without other guys bothering to say not-so>.

Or better: to `some guy said so,' we add `no guy said no' (at least so said the log file); so we're better in the know.

No. I mean... yes.

The New York Times Science section recently ran this item on Nick Bostrom's Simulation Argument. It is an odd article, because the Science section usually touts recent or upcoming research. Bostrom's paper presenting the simulation argument appeared in Phil Quarterly in 2003 and had been circulating on the web for a couple of years before that. Moreover, Bostrom has been promoting it as something important for most of this century. He has a website at simulation-argument.com with a simulation argument FAQ and the proviso, "I regret that I cannot usually respond to individual queries about the argument. However, I try to respond to reporters."*

At its heart, the argument is a twist on the standard brain-in-a-vat argument for scepticism. The usual argument points out that there is nothing in our immediate experience of the world to prove that the experiences are not fed to us by a system simulating such a world. Bostrom adds a twist by imagining who the mad scientists running the simulation might be.

Suppose that the human race lasts long enough to be able to run simulations which include people like us. If it does, then descendants of ours might run many similar simulations. There would thus be one actual historical 2007 and many simulated 2007s in which we might be living. Put a uniform probability distribution over them, and you get the conclusion that we are probably not in the actual 2007 but instead in one of the simulations.

As Bostrom notes, the argument really gives you a dilemma: Either future humans will not run so many simulations (because they die out, never develop the capability, or decide not to do it) or we are probably in a simulation.

OK, but what does this add to the evil demon worries that have been with us since the seventeenth century? Instead of the mere possibility that I might be a brain in a vat, it is supposed to yield the high (conditional) probability that I am a brain in a vat. Yet the probability assessment requires thinking about how the world works, which I must do as informed by what I know about the world.

Either we have an answer to the traditional worry or we do not. If we do not, then the new argument is redundant. So suppose we do have an answer to the traditional worry. There are two kinds of answers we might think we have:

First, we might accept a reliabilist premise that our natural faculties are a reliable guide to the truth. If we unflinchingly accept that premise, then we believe already that we are not in a simulation.**

Second, we might trust our natural faculties without an explicit premise that they deliver the truth. Once we accept that standard of evidence, my seeing the world is enough of a ground for me to believe in it. The simulation argument requires that trust to get started and so comes along too late to undercut it. To paraphrase Thomas Reid, starting with trust won't get you a sceptical conclusion.

Suppose, contrary to all that, that the argument leaves me mired in scepticism. I can imagine a great many creatures who might do simulations of creatures like us. I can also imagine creatures that would do simulations of creatures like them. Computational constraints don't put the brakes on this speculation, because powerful gods might want to simulate worlds more constrained than their own; perhaps the computational constraints we know are just features of our world as simulated. There is no sensible way to put a probability distribution over these possibilities. In the Times article, Bostrom is quoted as saying: "My gut feeling, and it's nothing more than that, is that there's a 20 percent chance we're living in a computer simulation." I have no gut feeling on the subject, because I can't make sense of 'chance' here at all.

Apart from the merits of the argument, the story in the Times is a bit disconcerting. It just encourages the all too popular conception of philosophers as purveyors of headtrips and wacky sophisms. But wouldn't I return the call if they wanted to do a story on some wacky sophism of mine? Perhaps I could feign interest.

* The argument has also gotten attention from philosophers; see, inter alia, Brian Weatherson's blogging on the subject.

** David Chalmers has argued that simulation is not a sceptical possibility, but simply an alternate metaphysics. If we are in a simulation, then everything we know about tables, chairs, dogs, ducks, and the rest of the world is true; it's just that those things are (considered fundamentally) part of the simulation just as we are.

All the chimps give a shout out to Benedict 
Speaking recently before a bevy of priests, Pope Benedict is reported to have claimed (in effect) that creationism is bunk. In this story, he is quoted as saying that "there is much scientific proof in favour of evolution, which appears as a reality that we must see and which enriches our understanding of life and being as such."

Yet (as befits a Pope) he still thinks that God fits in somewhere: Evolution "does not answer the great philosophical question 'where does everything come from?'" This suggests a potentially unstable compromise position wherein evolution provides the story of how events unfolded and religion provides the story of why.

Regardless, the story (I think) misunderstands the Pope's remarks. It adds, not quoting the Pope: "His comments appear to be an endorsement of the doctrine of intelligent design." Yet proponents of ID claim that it is a scientific rival to evolution, an alternative story of what happened. The Pope does not seem on board with such claptrap. He confines theology to the philosophical domain, which is incompatible with the IDists' demand for counter-evolutionary teaching in biology classrooms.

And the Pope wants mass to be offered in Latin more often.

Whinging about conditionalization 
Subjective Bayesianism as it is often employed in philosophy of science consists of three commitments:
PSYCH (the psychological bit) An agent's degrees of belief can be represented as a real number for each proposition of the language.

SYNCH (the synchronic bit) An agent's degrees of belief at a time ought to obey the axioms of probability.

DIACH (the diachronic bit) An agent's degrees of belief should be updated over time by conditionalization.

As an example of DIACH, suppose that P1 is the probability function representing your beliefs before learning some evidence E and that P2 is the function afterwards. After learning E, you believe it; so P2(E)=1. For another hypothesis H, you should change your degree of belief in H to your prior degree of belief in H given E; that is, P2(H)=P1(H|E). There is a general probability kinematics for cases in which learning changes your degree of belief in E without making you certain of it; it is often called Jeffrey conditionalization.
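To make the two updating rules concrete, here is a minimal sketch of strict and Jeffrey conditionalization over a toy four-world space. The world space and all the numbers are my own made-up illustration, not anything from the Bayesian literature discussed here.

```python
# Worlds are tuples (H, E); the prior assigns a probability to each.
# These prior values are arbitrary, chosen only for illustration.
prior = {
    (True, True): 0.4,    # H and E
    (True, False): 0.2,   # H and not-E
    (False, True): 0.1,   # not-H and E
    (False, False): 0.3,  # not-H and not-E
}

def prob(p, pred):
    """Probability under distribution p of the event picked out by pred."""
    return sum(v for w, v in p.items() if pred(w))

def conditionalize(p, pred):
    """Strict conditionalization: learn the event pred with certainty."""
    z = prob(p, pred)
    return {w: (v / z if pred(w) else 0.0) for w, v in p.items()}

def jeffrey(p, pred, new_prob):
    """Jeffrey conditionalization: shift the probability of pred to
    new_prob without certainty, rescaling inside and outside the event."""
    z_in = prob(p, pred)
    z_out = 1 - z_in
    return {w: v * (new_prob / z_in if pred(w) else (1 - new_prob) / z_out)
            for w, v in p.items()}

H = lambda w: w[0]
E = lambda w: w[1]

# DIACH: after learning E, P2(H) = P1(H|E) = 0.4 / 0.5 = 0.8
posterior = conditionalize(prior, E)
print(prob(posterior, H))  # 0.8

# Jeffrey case: learning merely raises P(E) to 0.9, short of certainty.
soft = jeffrey(prior, E, 0.9)
print(round(prob(soft, H), 3))  # 0.76
```

Strict conditionalization is just the limiting case of the Jeffrey rule where the new probability of E is 1.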

Colin Howson and Peter Urbach, in ch 6 of Scientific Reasoning, argue that violating SYNCH makes one inconsistent but that violating DIACH does not. They argue by constructing a case in which you are imagined to consistently violate DIACH. I'll summarize a streamlined version of the case before whinging about their argument.

Let P1, P2 be your successive degrees of belief. You believe some claim H for legitimate reasons: P1(H)=1. You suspect, however, that you have a brain lesion such that you will be less confident of H later on. Let E be the proposition 'P2(H)=1/2'. You suspect now that, because of the brain lesion, E will be true. Yet you think that E does not indicate any legitimate reason to doubt H. It will just be because you are overcome by vapors of black bile. As such, P1(H|E)=1. That is, you are presently confident of H even supposing that E turns out to be true (and you later lose confidence in H).

Now the brain lesion does its work, and P2 is your new credence function. You are now uncertain of H: P2(H)=1/2. This is just the state of affairs represented by E, and you are aware of it, so P2(E)=1. If you kept your conditional probabilities fixed, as DIACH demands, then P1(H|E)=P2(H|E)=1. Yet it follows from the other values and rules of probability that P2(H|E)=1/2, so DIACH leads to a violation of SYNCH. Violating SYNCH would be inconsistent, so consistency demands violating DIACH.
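The clash in the last step can be checked with trivial arithmetic. A quick sketch of that check (my own illustration, not anything from Howson and Urbach's text):

```python
# After the lesion: P2(H) = 1/2 and, since you know your own state, P2(E) = 1.
p2_H = 0.5
p2_E = 1.0

# Because P2(E) = 1, the not-E region has probability zero,
# so P2(H & E) = P2(H).
p2_H_and_E = p2_H

# Ratio definition of conditional probability.
p2_H_given_E = p2_H_and_E / p2_E
print(p2_H_given_E)  # 0.5, not the 1 that keeping P1(H|E) fixed demands
```

So DIACH's demand that P2(H|E) stay at 1 contradicts the value the probability axioms force on P2(H|E), which is the SYNCH violation the argument trades on.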

That's the argument.

The brain lesion in this example seems like too much of a philosophers' contrivance, but I'll let that slide for a moment. Note, however, that the lesion makes it impossible to obey DIACH at all in this case. Given that you have prior P1(H|E)=1 and that you learn E, you should have posterior P2(H)=1. The lesion stops you from drawing that conclusion.

You can still obey SYNCH by adjusting P2(H|E)=1/2, but that does not seem like much of a victory. You would remain consistent, and so in that limited sense rational, but you would still be apportioning your belief in a vicious way. Your organic condition would have condemned you to a kind of irrationality, even if not inconsistency, and violation of DIACH would be symptomatic.

Moreover, there is a kind of legerdemain involved in conditionalizing on your present degrees of belief. As Richard Moran has argued, there is an important difference between third-person ascription (judging whether Steve believes H, for example) and first-person ascription (judging whether you believe H). The former involves considering Steve's behavior. The latter involves considering the evidence for and against H. You can ask the former question about yourself up until now. You ask the latter when you deliberate whether you now and henceforth shall believe H.

In the case given above, is your deliberation of the third-person or the first-person kind?

If it is third personal, then you must conclude that P2(H|E)=1/2. All of your behavior will indicate that, because it indicates P2(H)=1/2 and P2(E)=1. But, from the third-person standpoint, one must conclude that this configuration of belief is the irrational result of a bad brain.

If it is first personal, then it is nonsense to represent your reflection in terms of P2(H|E). E is itself a claim about P2. You must ask yourself, instead, whether the evidence suggests that H may be concluded from E. In effect, you are deliberating on what P3(H|E) ought to be. It is unclear how this deliberation would or should go, because the gedanken lesion is so underspecified that we don't know how or even if it constrains P3.

The subjectivist might object that it is spurious to call the violation of DIACH in this case irrational, because no bell goes off telling you that your change of belief is vicious. Yet the subjective Bayesian typically does not specify which belief changes count as observations. If we consider purely your first-person point of view and treat DIACH as a rational constraint, then your spontaneous change from believing H (P(H)=1) to doubting it (P(H)=1/2) just is the learning that happens in this case. You ought to conditionalize on this new piece of evidence, using the full probability kinematics.

(Actually, the usual framework doesn't allow you to renege on beliefs once they are set to probability 1. But that is incidental to the point here. If the case works for H&U's argument at all, it will work for any value of P1(H) that is distinct from P2(H).)

The world is full of strata 
As Greg noted recently, there are no real measures of scholarly impact for philosophy journals. The blog Brains links to a recent effort by the European Science Foundation to provide such a measure. (I encountered the Brains entry via Brian Leiter's blog.) Various journals in philosophy and science studies are ranked A, B, and C. These lists are meant to represent the exposure and stature of the journals.

The lists are available as PDFs: philosophy and HPS.

The ESF FAQ offers several caveats: These are not intended to be rankings of journal quality. C ranked journals might still be quite influential within a region or scholarly niche. The rankings may be used to judge programs or institutions, but should not be used to judge individual scholars.

One wonders whether people will mind these caveats, however. Especially to an American, A, B, and C look like grades of quality. (Although I know that students are given numbers instead of letter grades in parts of Europe, I'm not sure whether letter grades are an exclusively American affectation.) Regardless, there is a tendency to overinterpret rankings when there are no other rankings available.

As an analogy, consider Leiter's Philosophical Gourmet Report. It is an influential ranking of graduate departments, but it is specifically a ranking of the research stature of the faculty within such departments. Nevertheless, it is used much more broadly than that-- largely because there is no comparable way of explicitly comparing graduate programs or philosophy departments.

The methodology of the Gourmet Report has been revised in recent iterations, and I will grant for the sake of discussion that it is now a decent instrument for measuring what it claims to measure. However, its influence was waxing even before its methodology had been honed. And even an accurate instrument can be used incorrectly. Consider some examples. (1) The tendency to take the rankings as judgments of department quality may lead job candidates to treat any ranked department as being better than an unranked department. A job at a first-rate liberal arts college might still have much to recommend it over a job at a school near the bottom of the list, but liberal arts colleges are not even eligible for the list. (2) It is an all too common fallacy to judge a philosopher by the prestige of their institution rather than on the basis of their own work. This does not require explicit rankings, but it is perhaps abetted by them.

Leiter offers such caveats, of course-- just as the FAQ for the ESF journal ratings explains that they are not ratings of quality. Yet a straightforward rating is an appealing thing. Once we've got one, especially when there is no other instrument at hand, it is tempting to use it too widely.

Once you've got a hammer, the world is full of nails. Once you have a ranking instrument, the world is full of strata.
