The New York Times Science section recently ran this item on Nick Bostrom's Simulation Argument. It is an odd article, because the Science section usually touts recent or upcoming research. Bostrom's paper presenting the simulation argument appeared in Phil Quarterly in 2003 and had been circulating on the web for a couple of years before that. Moreover, Bostrom has been promoting it as something important for most of this century. He has a website with a simulation argument FAQ and the proviso, "I regret that I cannot usually respond to individual queries about the argument. However, I try to respond to reporters."*

At its heart, the argument is a twist on the standard brain-in-a-vat argument for scepticism. The usual argument points out that there is nothing in our immediate experience of the world to prove that the experiences are not fed to us by a system simulating such a world. Bostrom adds a twist by imagining who the mad scientists running the simulation might be.

Suppose that the human race lasts long enough to be able to run simulations which include people like us. If it does, then descendants of ours might run many similar simulations. There would thus be one actual historical 2007 and many simulated 2007s in which we might be living. Put a uniform probability distribution over them, and you get the conclusion that we are probably not in the actual 2007 but instead in one of the simulations.
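The counting move here can be made concrete. A minimal sketch in Python (my illustration, not Bostrom's own formalism; the function name and numbers are invented): with one actual history and n indistinguishable simulated ones, a uniform distribution over the n+1 candidates assigns probability n/(n+1) to being in a simulation.

```python
# Sketch of the counting step behind the simulation argument: one actual
# 2007 plus n simulated 2007s, with a uniform distribution over all n+1
# candidate histories we might be living in.

def prob_simulated(n_simulations):
    """Probability of being in a simulation, given n_simulations
    indistinguishable simulated histories alongside one actual history."""
    return n_simulations / (n_simulations + 1)

print(prob_simulated(1))    # one simulation: 0.5
print(prob_simulated(999))  # many simulations: 0.999
```

As the number of simulations grows, the probability of being in the actual history shrinks toward zero, which is all the argument's probabilistic step requires.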

As Bostrom notes, the argument really gives you a dilemma: Either future humans will not run so many simulations (because they die out, never develop the capability, or decide not to do it) or we are probably in a simulation.

OK, but what does this add to the evil demon worries that have been with us since the seventeenth century? Instead of the mere possibility that I might be a brain in a vat, it is supposed to yield a high (conditional) probability that I am a brain in a vat. Yet the probability assessment requires thinking about how the world works, which I must do as informed by what I already know about the world.

Either we have an answer to the traditional worry or we do not. If we do not, then the new argument is redundant. So suppose we do have an answer to the traditional worry. There are two kinds of answers we might think we have:

First, we might accept a reliabilist premise that our natural faculties are a reliable guide to the truth. If we unflinchingly accept that premise, then we believe already that we are not in a simulation.**

Second, we might trust our natural faculties without an explicit premise that they deliver the truth. Once we accept that standard of evidence, my seeing the world is enough of a ground for me to believe in it. The simulation argument requires that trust to get started and so comes along too late to undercut it. To paraphrase Thomas Reid, starting with trust won't get you a sceptical conclusion.

Suppose, contrary to all that, that the argument leaves me mired in scepticism. I can imagine a great many creatures who might do simulations of creatures like us. I can also imagine creatures that would do simulations of creatures like them. Computational constraints don't put the brakes on this speculation, because powerful gods might want to simulate worlds more constrained than their own; perhaps the computational constraints we know are just features of our world as simulated. There is no sensible way to put a probability distribution over these possibilities. In the Times article, Bostrom is quoted as saying: "My gut feeling, and it's nothing more than that, is that there's a 20 percent chance we're living in a computer simulation." I have no gut feeling on the subject, because I can't make sense of 'chance' here at all.

Apart from the merits of the argument, the story in the Times is a bit disconcerting. It just encourages the all too popular conception of philosophers as purveyors of headtrips and wacky sophisms. But wouldn't I return the call if they wanted to do a story on some wacky sophism of mine? Perhaps I could feign interest.

* The argument has also gotten attention from philosophers; see, inter alia, Brian Weatherson's blogging on the subject.

** David Chalmers has argued that simulation is not a sceptical possibility, but simply an alternate metaphysics. If we are in a simulation, then everything we know about tables, chairs, dogs, ducks, and the rest of the world is true; it's just that those things are (considered fundamentally) part of the simulation, just as we are.

All the chimps give a shout out to Benedict 
Speaking recently before a bevy of priests, Pope Benedict is reported to have claimed (in effect) that creationism is bunk. In this story, he is quoted as saying that "there is much scientific proof in favour of evolution, which appears as a reality that we must see and which enriches our understanding of life and being as such."

Yet (as befits a Pope) he still thinks that God fits in somewhere: Evolution "does not answer the great philosophical question 'where does everything come from?'" This suggests a potentially unstable compromise position wherein evolution provides the story of how events unfolded and religion provides the story of why.

Regardless, the story (I think) misunderstands the Pope's remarks. It adds, not quoting the Pope: "His comments appear to be an endorsement of the doctrine of intelligent design." Yet proponents of ID claim that it is a scientific rival to evolution, an alternative story of what happened. The Pope does not seem on board with such claptrap. He confines theology to the philosophical domain, which is incompatible with the IDists' demand for counter-evolutionary teaching in biology classrooms.

And the Pope wants mass to be offered in Latin more often.

Whinging about conditionalization 
Subjective Bayesianism as it is often employed in philosophy of science consists of three commitments:
PSYCH (the psychological bit) An agent's degrees of belief can be represented as a real number for each proposition of the language.

SYNCH (the synchronic bit) An agent's degrees of belief at a time ought to obey the axioms of probability.

DIACH (the diachronic bit) An agent's degrees of belief should be updated over time by conditionalization.

As an example of DIACH, suppose that P1 is the probability function representing your beliefs before learning some evidence E and that P2 is the function afterwards. After learning E, you believe it; so P2(E)=1. For any other hypothesis H, you should change your degree of belief in H to your prior degree of belief in H given E; that is, P2(H)=P1(H|E). There is a general probability kinematics for cases in which your learning changes your degree of belief in E without making you certain of it; it is often called Jeffrey conditionalization.
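For concreteness, here is a minimal sketch of both updating rules over a finite set of worlds. The representation, function names, and numbers are my own illustration, not part of the standard presentations:

```python
# Strict and Jeffrey conditionalization over a finite set of "worlds".
# Hypothetical example: the worlds, priors, and event names are invented.

def prob(dist, event):
    """Probability of an event (a set of worlds) under a distribution."""
    return sum(p for w, p in dist.items() if w in event)

def conditionalize(prior, E):
    """Strict conditionalization on E: P2(w) = P1(w)/P1(E) for w in E, else 0."""
    p_e = prob(prior, E)
    return {w: (p / p_e if w in E else 0.0) for w, p in prior.items()}

def jeffrey(prior, E, new_p_e):
    """Jeffrey conditionalization: shift P(E) to new_p_e, holding the
    probabilities conditional on E and on not-E fixed."""
    p_e = prob(prior, E)
    return {w: (p * new_p_e / p_e if w in E else p * (1 - new_p_e) / (1 - p_e))
            for w, p in prior.items()}

# Four worlds, labelled by whether H and E hold (uppercase = true).
P1 = {"HE": 0.4, "He": 0.2, "hE": 0.1, "he": 0.3}
E = {"HE", "hE"}
H = {"HE", "He"}

# Learning E with certainty: P2(H) = P1(H|E) = 0.4/0.5 = 0.8.
P2 = conditionalize(P1, E)
print(prob(P2, H), prob(P2, E))

# Learning that merely raises P(E) to 0.9: P3(H) = 0.8*0.9 + 0.4*0.1 = 0.76.
P3 = jeffrey(P1, E, 0.9)
print(prob(P3, H))
```

Note that Jeffrey conditionalization reduces to the strict rule when the new probability of E is 1.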

Colin Howson and Peter Urbach, in ch 6 of Scientific Reasoning, argue that violating SYNCH makes one inconsistent but that violating DIACH does not. They argue by constructing a case in which you are imagined to consistently violate DIACH. I'll summarize a streamlined version of the case before whinging about their argument.

Let P1, P2 be your successive degrees of belief. You believe some claim H for legitimate reasons: P1(H)=1. You suspect, however, that you have a brain lesion such that you will be less confident of H later on. Let E be the proposition 'P2(H)=1/2'. You suspect now that, because of the brain lesion, E will be true. Yet you think that E does not indicate any legitimate reason to doubt H. It will just be because you are overcome by vapors of black bile. As such, P1(H|E)=1. That is, you are presently confident of H even supposing that E turns out to be true (and you later lose confidence in H.)

Now the brain lesion does its work, and P2 is your new credence function. You are now uncertain of H: P2(H)=1/2. This is just the state of affairs represented by E, and you are aware of it, so P2(E)=1. If you kept your conditional probabilities fixed, as DIACH demands, then P1(H|E)=P2(H|E)=1. Yet it follows from the other values and rules of probability that P2(H|E)=1/2, so DIACH leads to a violation of SYNCH. Violating SYNCH would be inconsistent, so consistency demands violating DIACH.
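The clash can be checked arithmetically. A sketch restating the values in the case (the variable names are my own):

```python
# Restating the lesion case numerically (hypothetical variable names).
P1_H_given_E = 1.0   # now: confident of H even supposing E
P2_H = 0.5           # after the lesion: E has come true
P2_E = 1.0           # and you know your own new credence, so E is certain

# Since P2(E) = 1, P2(H and E) = P2(H), so by the ratio definition:
P2_H_given_E = P2_H / P2_E
print(P2_H_given_E)  # 0.5, yet DIACH demands P2(H|E) = P1(H|E) = 1
```

Keeping P2(H|E) at 1 while P2(H)=1/2 and P2(E)=1 would violate the probability axioms, which is the point of the example.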

That's the argument.

The brain lesion in this example seems like too much of a philosophers' contrivance, but I'll let that slide for a moment. Note, however, that the lesion makes it impossible to obey DIACH at all in this case. Given that you have prior P1(H|E)=1 and that you learn E, you should have posterior P2(H)=1. The lesion stops you from drawing that conclusion.

You can still obey SYNCH by adjusting P2(H|E)=1/2, but that does not seem like much of a victory. You would remain consistent, and so in that limited sense rational, but you would still be apportioning your belief in a vicious way. Your organic condition would have condemned you to a kind of irrationality, even if not inconsistency, and violation of DIACH would be symptomatic.

Moreover, there is a kind of legerdemain involved in conditionalizing on your present degrees of belief. As Richard Moran has argued, there is an important difference between third-person ascription (judging whether Steve believes H, for example) and first-person ascription (judging whether you believe H). The former involves considering Steve's behavior. The latter involves considering the evidence for and against H. You can ask the former question about yourself up until now. You ask the latter when you deliberate whether you now and henceforth shall believe H.

In the case given above, is your deliberation of the third-person or the first-person kind?

If it is third personal, then you must conclude that P2(H|E)=1/2. All of your behavior will indicate that, because it indicates P2(H)=1/2 and P2(E)=1. But, from the third-person standpoint, one must conclude that this configuration of belief is the irrational result of a bad brain.

If it is first personal, then it is nonsense to represent your reflection in terms of P2(H|E). E is itself a claim about P2. You must ask yourself, instead, whether the evidence supports concluding H from E. In effect, you are deliberating on what P3(H|E) ought to be. It is unclear how this deliberation would or should go, because the gedanken lesion is so underspecified that we don't know how or even if it constrains P3.

The subjectivist might object that it is spurious to call the violation of DIACH in this case irrational, because there is no bell that goes off telling you that your change of belief is vicious. Yet the subjective Bayesian typically does not specify which belief changes count as observations. If we consider purely your first-person point of view and treat DIACH as a rational constraint, then your spontaneous change from believing H (P(H)=1) to doubting it (P(H)=1/2) just is the learning that happens in this case. You ought to conditionalize on this new piece of evidence, using the full probability kinematics.

(Actually, the usual framework doesn't allow you to renege on beliefs once they are set to probability 1. But that is incidental to the point here. The case will suffice for H&U's argument, if at all, supposing any value for P1(H) that is distinct from P2(H).)

The world is full of strata 
As Greg noted recently, there are no real measures of scholarly impact for philosophy journals. The blog Brains links to a recent effort by the European Science Foundation to provide such a measure. (I encountered the Brains entry via Brian Leiter's blog.) Various journals in philosophy and science studies are ranked A, B, and C. These lists are meant to represent the exposure and stature of the journals.

The lists are available as PDFs: philosophy and HPS.

The ESF FAQ offers several caveats: These are not intended to be rankings of journal quality. C ranked journals might still be quite influential within a region or scholarly niche. The rankings may be used to judge programs or institutions, but should not be used to judge individual scholars.

One wonders whether people will mind these caveats, however. Especially to an American, A, B, and C look like grades of quality. (Although I know that students are given numbers instead of letter grades in parts of Europe, I'm not sure whether letter grades are an exclusively American affectation.) Regardless, there is a tendency to overinterpret rankings when there are no other rankings available.

As an analogy, consider Leiter's Philosophical Gourmet Report. It is an influential ranking of graduate departments, but it is specifically a ranking of the research stature of the faculty within such departments. Nevertheless, it is used much more broadly than that-- largely because there is no comparable way of explicitly comparing graduate programs or philosophy departments.

The methodology of the Gourmet Report has been revised in recent iterations, and I will grant for the sake of discussion that it is now a decent instrument for measuring what it claims to measure. However, its influence was waxing even before its methodology had been honed. And even an accurate instrument can be used incorrectly. Consider some examples. (1) The tendency to take the rankings as judgments of department quality may lead job candidates to treat any ranked department as being better than an unranked department. A job at a first-rate liberal arts college might still have much to recommend it over a job at a school near the bottom of the list, but liberal arts colleges are not even eligible for the list. (2) It is an all too common fallacy to judge a philosopher by the prestige of their institution rather than on the basis of their own work. This does not require explicit rankings, but it is perhaps abetted by them.

Leiter offers such caveats, of course-- just as the FAQ for the ESF journal ratings explains that they are not ratings of quality. Yet a straightforward rating is an appealing thing. Once we've got one, especially when there is no other instrument at hand, it is tempting to use it too widely.

Once you've got a hammer, the world is full of nails. Once you have a ranking instrument, the world is full of strata.

Ruminations on fecundity 
Philosophers of science who argue over the virtues of theories typically concentrate on fit with observation, novel prediction, support for intervention, explanation, and unification. For each, there are arguments that it is truth-indicative, that it is not, that it marks a theory worth accepting, that it does not, and so on. Philosophers have had less to say about fecundity, the virtue a theory has when it gives us some sense of what to do next in enquiry. Obviously, scientists can make no use whatsoever of a theory that gives them no sense of what they might do next. The remainder of this post provides some almost connected ruminations on the subject.

Lakatos' Methodology of Scientific Research Programmes is a notable exception to philosophers' neglect of fecundity. (I have the affectation of using Lakatos' British spelling of "programme" to distinguish research programmes in a Lakatosian sense from more quotidian programs.)

Consider, as an example, research into the influence of hormones on development; what Helen Longino dubs the linear-hormonal model (LH). The programme "can continue to generate studies that are used to support microhypotheses about the etiology of particular forms of behavior that are consistent with [its] broader model." (For references, see Hormone Research as an Exemplar of Underdetermination.) Study after study can observe a correlation between prenatal hormone levels and gender-linked juvenile behavior. Different populations and data sets may be used, and so in some sense the programme suggests further research. Yet the programme dwells on observations of superficial phenomena, accumulating similar results insufficient to underwrite practitioners' claims about brain development. The programme's further direction doesn't seem to get anywhere.

In Lakatos' jargon, a research programme is theoretically degenerating if it does not yield any new testable predictions and empirically degenerating if its predictions turn out to be false.

One might defend the LH programme: Its research does make predictions and those predictions are often true, so it is both theoretically and empirically progressive (i.e., not degenerating.)

However, the LH programme only makes the conservative prediction that an observed correlation between two variables will continue in further instances. The predictions of a progressive research programme should be novel and unexpected, so LH is either somewhat degenerating or only minimally progressive (depending on how you want to spin it.) Unfortunately, surprisingness is most naturally treated as a psychological notion. This would lead us to say that jumpy, unimaginative scientists (who are surprised by banal predictions) will be judged to enjoy more progress than imperturbable scientists (who are surprised by nothing.)

The LH programme offers a kind of pathological fecundity, but are there any scientific research programmes that lack even that sliver of the virtue?

A clear example, I think, is so-called Intelligent Design theory. It is deliberately constructed so as to be incompatible with the research programme of evolutionary biology but to stop short of actually describing the alleged designer. It makes some predictions, perhaps, but not ones that can guide any sort of research programme.

A search of the blog reminds me that I have discussed these issues before, apropos of demarcation and pragmatism. As a slight tangent, I think this is an advantage of having a blog: If I had merely thought those things, I would have forgotten irretrievably. If I had written notes to myself on scraps of paper, they would either be buried in a file cabinet or thrown away long ago. Moreover (as I pointed out in the last post) notes to myself would have been more elliptical than the blog post. Even if I had exhumed them, it might have required some effort even for me to reconstruct what I had meant.

I am not certain how to draw these strands together, but during last Summer's reading group I discovered this relevant passage in Dewey's Logic:
The history of science, as an exemplification of the method of inquiry, shows that the verifiability (as positivism understands it) of hypotheses is not nearly as important as is their directive power. As a broad statement, no important scientific hypothesis has ever been verified in the form in which it was originally presented nor without very considerable revisions and modifications. The justification of such hypotheses has lain in their power to direct new orders of experimental observation and to open up new problems and fields of subject matter. In doing these things, they have not only provided new facts but have often radically altered what were previously taken to be facts. [p. 519]
To my knowledge, Dewey's point here was not directly influential on later philosophers and historians who said similar things. Thomas Kuhn (who listed fecundity among the scientific values) and Martin Rudwick (who has often emphasized the fact that a victorious theory typically arises from attempts to develop earlier theories) probably had not read this passage in Dewey. (This is a safe bet, because almost no one read Dewey's Logic.)

As a final note, I should say that the word 'fecundity' has a strange resonance for me. My homepage and e-mail are hosted at a domain I have owned since 1999. I bought hosting and my own domain just so that I could toy with writing CGIs, an activity forbidden on the university servers. I had first encountered the word 'fecundity' years before when reading Jeremy Bentham. This gave it a strange association with the internet, since the live webcam of Jeremy Bentham's mummified body was one of the cool things on the net back then. The word has a nice sound to it, and the connection motivated one of the first banner ads I made for the site.
