Duhem? I never even... 
I am teaching Poincaré and Duhem in seminar this week. They are both so sensible that reading them elicits a twinge of despair at how little progress has been made in philosophy of science since. They were ahead of their time, of course, and there have been some real advances. This is not a post about despair-- or about the bits that were so forward-looking-- so I'll move on.

In Chapter VII of The Aim and Structure of Physical Theory, Duhem claims that the modern dynamical definitions of force and power are not the meanings we get from common sense. We observe that a wagon does not move when there is no horse harnessed to it. With a horse, it moves when the horse pulls. It stops when the horse stops. Thus, Duhem suggests, common sense gives us a quasi-Aristotelean notion of force according to which force is exerted at each moment in which a thing moves. Bodies in motion tend to stop absent exertion.

The example is quaint, since we 21st-century philosophers rarely if ever see horses and wagons. The scepticism about common sense is also different from what it would be for us. The 19th century had been awash with common sense philosophy that was a kind of know-nothingism: a way for educated people to feel comfortable with their dogmas, delivered by common sense in a cellophane wrapper of critical thought.

Nevertheless, Duhem is not entirely dismissive of common sense. He offers a rather nice metaphor of common sense as a trust fund: "The fund of common sense is not a treasure buried in the soil to which no coin can ever come to be added; it is the capital of an enormous and prodigiously active association formed by the union of human minds" (p. 261). As he develops the metaphor, he suggests that anything appealed to as a timeless axiom of common sense is rather a withdrawal of a discovery made previously.* If the common sense claim proves unworkable, it may be replaced by a more profitable investment.

That seems right, if we conceive common sense primarily as a source of general dicta like definitions of power-- not the sort of thing we would find perfected in theories suited to daily life. Yet Duhem insists that common sense is right in its description of how wagons and horses typically behave. So, he concludes, "observations of common sense are certain to the extent and degree to which they are deficient in detail and precision" (p. 264). They suffice as observations but fail as laws.

Since I have an axe to grind, let's distinguish three ways of understanding common sense:

1. Common sense is a reservoir of general principles: that such-and-so is the nature of power, that our senses are reliable, that all things are the work of god, and so on. This is the 19th-century, know-nothing common sense. Duhem advocates fallibilism (perhaps even scepticism) about such dogmatic claims.

2. Common sense is a reservoir of particular judgements: that there is a wagon over there, that the wagons we have seen have stopped when the horse rests, and so on. Duhem calls such claims "true and certain", which seems to overstate things.

3. Common sense is a way of forming beliefs: we trust our senses unless there are specific reasons to think we are deceived, we trust our memory, and so on. Duhem does not discuss this explicitly, but it is a natural way to understand the source of the particular judgments that he thinks of as "true and certain."

And the axe that I'm grinding: Thomas Reid, that great source of the common sense tradition, is too often read as advocating 1, sometimes as advocating 2 (eg, by VanCleve), and only when read correctly as advocating 3.


* William James argues similarly that what is now common sense was once a discovery. Basic truths, James suggests, were the inventions of genius cave men. The Pragmatism lectures were only a few years after Duhem's discussion appeared as articles, so I wonder if James was familiar with Duhem.

Who put the we in the wikipedia? 
Ron alerted me to the existence of Wikipedia Scanner, a service that does the reverse lookup to follow anonymous Wikipedia edits back to their source. As one might expect, it has turned up a number of cases in which corporations actively manipulated their own entries. You can get details from Wired-- or almost anywhere else, for that matter. It's already been on Colbert, making this post about two weeks behind the wave.

As someone sceptical about Wikipedia, I feel like I should have something to say about Wiki Scanner. For example, I could point out that doing reverse lookup on IP addresses only works so long as MegaHugeConglomoco does their wiki hatchet work from the corporate office. If they have it done by their PR firm or by employees telecommuting from their homes, Wiki Scanner won't make the connection. And surely any company playing Wikipedia chameleon now will make specific efforts to disguise their involvement.

All of that means that Wiki Scanner cannot generate any reliable statistics about how many organizations are manipulating Wikipedia, how many entries are manipulated, or how many manipulations remain in the present Wikipedia corpus. It sifts out anecdotes of specific abuses. Although illustrative, these anecdotes are about what one would have expected. Of course, it would be fun if it turned up some singular surprises.

Addendum Sep 7


Mark alerts me to Wikipedia trust coloring, software designed to indicate trustworthiness of stretches of text in the Wikipedia. It does so by effectively making the entry history manifest in a single page. The presupposition is that text that has been in the Wikipedia through more edits is more likely to be true, or at least that users are more reliable if text they write persists through more edits.
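For concreteness, here is a toy sketch of that presupposition: score a word in the latest revision by how many earlier revisions also contain it. The scoring rule and the sample revisions are my own invention, not the actual trust-coloring algorithm.

```python
# Toy trust scoring: a word in the latest revision is trusted in
# proportion to how many earlier revisions also contain it.
# (A simplification of my own, not the real Wikipedia trust-coloring code.)

def trust_scores(revisions):
    """revisions: a list of revisions, each a list of words, oldest first."""
    latest = revisions[-1]
    history = revisions[:-1]
    return {word: sum(word in rev for rev in history) / max(len(history), 1)
            for word in latest}

revisions = [
    ["the", "cat", "sat"],
    ["the", "cat", "sat", "quietly"],
    ["the", "dog", "sat", "quietly"],
]
print(trust_scores(revisions))
# {'the': 1.0, 'dog': 0.0, 'sat': 1.0, 'quietly': 0.5}
```

On this rule, long-surviving text scores high and the freshly inserted 'dog' scores zero, which is the sense in which the entry history gets made manifest in the page.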

As Mark puts it:
Now we know, in addition to <some guy said so>, that <some guy said so without other guys bothering to say not-so>.

Or better: to `some guy said so,' we add `no guy said no' (at least so said the log file); so we're better in the know.

No?
No. I mean... yes.

Simulation 
The New York Times Science section recently ran this item on Nick Bostrom's Simulation Argument. It is an odd article, because the Science section usually touts recent or upcoming research. Bostrom's paper touting the simulation argument was in Phil Quarterly in 2003 and had been circulating on the web for a couple of years before that. Moreover, Bostrom has been promoting it as something important for most of this century. He has a website at simulation-argument.com with a simulation argument FAQ and the proviso, "I regret that I cannot usually respond to individual queries about the argument. However, I try to respond to reporters."*

At its heart, the argument is a twist on the standard brain-in-a-vat argument for scepticism. The usual argument points out that there is nothing in our immediate experience of the world to prove that the experiences are not fed to us by a system simulating such a world. Bostrom adds a twist by imagining who the mad scientists running the simulation might be.

Suppose that the human race lasts long enough to be able to run simulations which include people like us. If it does, then descendants of ours might run many similar simulations. There would thus be one actual historical 2007 and many simulated 2007s in which we might be living. Put a uniform probability distribution over them, and you get the conclusion that we are probably not in the actual 2007 but instead in one of the simulations.
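The counting step is easy enough to make explicit. Supposing one actual 2007 and N simulated ones (the number below is mine, purely for illustration), a uniform distribution gives:

```python
# One actual 2007 plus N simulated 2007s, with a uniform distribution
# over which of them we occupy. N is an arbitrary illustration, not
# a figure from Bostrom.
N = 1_000_000
p_simulated = N / (N + 1)
print(f"P(simulated) = {p_simulated:.6f}")  # 0.999999; approaches 1 as N grows
```

The whole pull of the argument comes from N being large and from the uniform distribution being the right one to use.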

As Bostrom notes, the argument really gives you a dilemma: Either future humans will not run so many simulations (because they die out, never develop the capability, or decide not to do it) or we are probably in a simulation.

OK, but what does this add to the evil demon worries that have been with us since the seventeenth century? Instead of the mere possibility that I might be a brain in a vat, it is supposed to yield the high (conditional) probability that I am a brain in a vat. Yet the probability assessment requires thinking about how the world works, which I must do as informed by what I know about the world.

Either we have an answer to the traditional worry or we do not. If we do not, then the new argument is redundant. So suppose we do have an answer to the traditional worry. There are two kinds of answers we might think we have:

First, we might accept a reliabilist premise that our natural faculties are a reliable guide to the truth. If we unflinchingly accept that premise, then we believe already that we are not in a simulation.**

Second, we might trust our natural faculties without an explicit premise that they deliver the truth. Once we accept that standard of evidence, my seeing the world is enough of a ground for me to believe in it. The simulation argument requires that trust to get started and so comes along too late to undercut it. To paraphrase Thomas Reid, starting with trust won't get you a sceptical conclusion.

Suppose, contrary to all that, that the argument leaves me mired in scepticism. I can imagine a great many creatures who might do simulations of creatures like us. I can also imagine creatures that would do simulations of creatures like them. Computational constraints don't put the brakes on this speculation, because powerful gods might want to simulate worlds more constrained than their own; perhaps the computational constraints we know are just features of our world as simulated. There is no sensible way to put a probability distribution over these possibilities. In the Times article, Bostrom is quoted as saying: "My gut feeling, and it's nothing more than that, is that there's a 20 percent chance we're living in a computer simulation." I have no gut feeling on the subject, because I can't make sense of 'chance' here at all.

Apart from the merits of the argument, the story in the Times is a bit disconcerting. It just encourages the all too popular conception of philosophers as purveyors of headtrips and wacky sophisms. But wouldn't I return the call if they wanted to do a story on some wacky sophism of mine? Perhaps I could feign interest.


* The argument has also gotten attention from philosophers; see, inter alia, Brian Weatherson's blogging on the subject.

** David Chalmers has argued that simulation is not a sceptical possibility, but simply an alternate metaphysics. If we are in a simulation, then everything we know about tables, chairs, dogs, ducks, and the rest of the world is true; it's just that those things are (considered fundamentally) part of the simulation, just as we are.

All the chimps give a shout out to Benedict 
Speaking recently before a bevy of priests, Pope Benedict is reported to have claimed (in effect) that creationism is bunk. In this story, he is quoted as saying that "there is much scientific proof in favour of evolution, which appears as a reality that we must see and which enriches our understanding of life and being as such."

Yet (as befits a Pope) he still thinks that God fits in somewhere: Evolution "does not answer the great philosophical question 'where does everything come from?'" This suggests a potentially unstable compromise position wherein evolution provides the story of how events unfolded and religion provides the story of why.

Regardless, the story (I think) misunderstands the Pope's remarks. It adds, not quoting the Pope: "His comments appear to be an endorsement of the doctrine of intelligent design." Yet proponents of ID claim that it is a scientific rival to evolution, an alternative story of what happened. The Pope does not seem on board with such claptrap. He confines theology to the philosophical domain, which is incompatible with the IDists' demand for counter-evolutionary teaching in biology classrooms.

And the Pope wants mass to be offered in Latin more often.

Whinging about conditionalization 
Subjective Bayesianism as it is often employed in philosophy of science consists of three commitments:
PSYCH (the psychological bit) An agent's degrees of belief can be represented as a real number for each proposition of the language.

SYNCH (the synchronic bit) An agent's degrees of belief at a time ought to obey the axioms of probability.

DIACH (the diachronic bit) An agent's degrees of belief should be updated over time by conditionalization.

As an example of DIACH, suppose that P1 is the probability function representing your beliefs before learning some evidence E and that P2 is the function afterwards. After learning E, you believe it; so P2(E)=1. For another hypothesis H, you should change your degree of belief in H to your prior degree of belief in H given E; that is, P2(H)=P1(H|E). There is a general probability kinematics for cases in which your learning changes your degree of belief in E but does not make you certain of it; often it's called Jeffrey conditionalization.
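Here is a minimal sketch of the two update rules, with placeholder numbers of my own rather than anything from the literature:

```python
def conditionalize(p_e, p_h_and_e):
    """Strict conditionalization: on learning E for certain,
    the new P(H) is the old P(H|E) = P(H & E) / P(E)."""
    return p_h_and_e / p_e

def jeffrey_update(p_h_given_e, p_h_given_not_e, new_p_e):
    """Jeffrey conditionalization: experience shifts P(E) to new_p_e
    without making E certain; conditional probabilities stay fixed."""
    return p_h_given_e * new_p_e + p_h_given_not_e * (1 - new_p_e)

# Made-up prior: P1(E) = 0.5 and P1(H & E) = 0.25, so P1(H|E) = 0.5.
print(conditionalize(p_e=0.5, p_h_and_e=0.25))   # P2(H) = 0.5
# Jeffrey case: experience raises P(E) from 0.5 to 0.8 without certainty.
print(jeffrey_update(0.5, 0.1, new_p_e=0.8))     # P2(H) = 0.42
```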


Colin Howson and Peter Urbach, in ch 6 of Scientific Reasoning, argue that violating SYNCH makes one inconsistent but that violating DIACH does not. They argue by constructing a case in which you are imagined to consistently violate DIACH. I'll summarize a streamlined version of the case before whinging about their argument.

Let P1, P2 be your successive degrees of belief. You believe some claim H for legitimate reasons: P1(H)=1. You suspect, however, that you have a brain lesion such that you will be less confident of H later on. Let E be the proposition 'P2(H)=1/2'. You suspect now that, because of the brain lesion, E will be true. Yet you think that E does not indicate any legitimate reason to doubt H. It will just be because you are overcome by vapors of black bile. As such, P1(H|E)=1. That is, you are presently confident of H even supposing that E turns out to be true (and you later lose confidence in H).

Now the brain lesion does its work, and P2 is your new credence function. You are now uncertain of H: P2(H)=1/2. This is just the state of affairs represented by E, and you are aware of it, so P2(E)=1. If you kept your conditional probabilities fixed, as DIACH demands, then P1(H|E)=P2(H|E)=1. Yet it follows from the other values and rules of probability that P2(H|E)=1/2, so DIACH leads to a violation of SYNCH. Violating SYNCH would be inconsistent, so consistency demands violating DIACH.

That's the argument.
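For what it's worth, the arithmetic in the case checks out; here is the bookkeeping (mine, not H&U's) spelled out:

```python
# After the lesion: P2(H) = 1/2 and P2(E) = 1.
p2_h, p2_e = 0.5, 1.0
# Since P2(E) = 1, the axioms give P2(H & E) = P2(H), so:
p2_h_given_e = p2_h / p2_e
print(p2_h_given_e)   # 0.5 -- but keeping conditional probabilities fixed,
                      # as DIACH is read here, would require P2(H|E) = P1(H|E) = 1
```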

The brain lesion in this example seems like too much of a philosophers' contrivance, but I'll let that slide for a moment. Note, however, that the lesion makes it impossible to obey DIACH at all in this case. Given that you have prior P1(H|E)=1 and that you learn E, you should have posterior P2(H)=1. The lesion stops you from drawing that conclusion.

You can still obey SYNCH by adjusting P2(H|E)=1/2, but that does not seem like much of a victory. You would remain consistent, and so in that limited sense rational, but you would still be apportioning your belief in a vicious way. Your organic condition would have condemned you to a kind of irrationality, even if not inconsistency, and violation of DIACH would be symptomatic.

Moreover, there is a kind of legerdemain involved in conditionalizing on your present degrees of belief. As Richard Moran has argued, there is an important difference between third-person ascription (judging whether Steve believes H, for example) and first-person ascription (judging whether you believe H). The former involves considering Steve's behavior. The latter involves considering the evidence for and against H. You can ask the former question about yourself up until now. You ask the latter when you deliberate whether you now and henceforth shall believe H.

In the case given above, is your deliberation of the third-person or the first-person kind?

If it is third personal, then you must conclude that P2(H|E)=1/2. All of your behavior will indicate that, because it indicates P2(H)=1/2 and P2(E)=1. But, from the third-person standpoint, one must conclude that this configuration of belief is the irrational result of a bad brain.

If it is first personal, then it is nonsense to represent your reflection in terms of P2(H|E). E is itself a claim about P2. You must ask yourself, instead, whether the evidence suggests that H could be concluded from E. In effect, you are deliberating on what P3(H|E) ought to be. It is unclear how this deliberation would or should go, because the gedanken lesion is so underspecified that we don't know how or even if it constrains P3.

The subjectivist might object that it is spurious to call the violation of DIACH in this case irrational, because there is no bell that goes off telling you that your change of belief is vicious. Yet the subjective Bayesian typically does not specify which belief changes count as observations. If we consider purely your first-person point of view and treat DIACH as a rational constraint, then your spontaneous change from believing H (P(H)=1) to not believing it (P(H)=1/2) just is the learning that happens in this case. You ought to conditionalize on this new piece of evidence, using the full probability kinematics.

(Actually, the usual framework doesn't allow you to renege on beliefs once they are set to probability 1. But that is incidental to the point here. The case will work for H&U's argument, if it works at all, given any value of P1(H) that is distinct from P2(H).)

