Cristyn is working on a sound installation inspired by the study of Neanderthals, and a couple of days ago we had a conversation that involved posing this scenario:
Imagine Homo sapiens had died out, and the Neanderthals had lived on to develop an advanced technological civilization. They would be curious as to what the world would have been like had they died out. So a sufficiently advanced Neanderthal civilization would run simulations in which we modern humans developed and lived our lives.
The lack of a satisfactory explanation for why Neanderthals died out is just what we would expect if our world were such a simulation. The Neanderthal high programmer would simply write in the removal of their Neanderthal forebears. Deus ex machina.
To jump to a conclusion: We have no direct way of knowing whether our world is a simulation or not. As such, we have good reason to believe that either (a) we are living in a Neanderthal's simulation of a world, or (b) there is a complete naturalistic explanation of why the Neanderthals died out. The radical contingency of natural selection means that we would never have decisive reason to believe the latter rather than the former.
Of course, this is a variation of the Simulation Argument. However, it ups the stakes a bit: The simulation argument admits that our modern human civilization developed, but asks if we are in it or in a simulation of it. The new argument leaves us with the possibility that there never really was a civilization like ours. Perhaps Napoleon, the Sears Tower, the TV show Star Trek, and the moon landing were never anything more than an idle-time process on a powerful computer run by a caffeine-addled trans-Neanderthal IT guy.
If we take this scenario seriously, there might be ethical consequences. For example, we might wish humans to exist in the physical world and not just in so many simulations. The trans-Neanderthal programmers probably have the technology to embody humans, based on their simulations of what we would be like. They will only do so if we seem like the sort of creature actually worth harvesting, rather than a curiosity only worth simulating. Unfortunately, it is unclear what they might want us to do. Perhaps we should perfect our skills at making espresso, so that they might embody us to work in their coffee shops.
I for one welcome our slope-browed overlords.
I've wanted to write this paper for quite some time, but the material from different areas has failed to cohere on the previous occasions when I've tried to write it. Now I finally have a complete draft.
What SPECIES can teach us about THEORY
ABSTRACT: This paper argues against the common, often implicit view that theories are some specific kind of thing. Instead, I argue for theory concept pluralism: There are multiple distinct theory concepts which we legitimately use in different domains and for different purposes, and we should not expect this to change. The argument goes by analogy with species concept pluralism, a familiar position in philosophy of biology. I conclude by considering some consequences for philosophy of science if theory concept pluralism is correct.
I am teaching Poincaré and Duhem in seminar this week. They are both so sensible that reading them elicits a twinge of despair at how little progress has been made in philosophy of science since. They were ahead of their time, of course, and there have been some real advances. This is not a post about despair-- or about the bits that were so forward-looking-- so I'll move on.
In Chapter VII of The Aim and Structure of Physical Theory, Duhem claims that the modern dynamical definitions of force and power are not the meanings we get from common sense. We observe that a wagon does not move when there is no horse harnessed to it. With a horse, it moves when the horse pulls. It stops when the horse stops. Thus, Duhem suggests, common sense gives us a quasi-Aristotelean notion of force according to which force is exerted at each moment in which a thing moves. Bodies in motion tend to stop absent exertion.
The example is quaint, since we 21st-century philosophers rarely if ever see horses and wagons. The scepticism about common sense is also different from what it would be for us. The 19th century had been awash in common sense philosophy that was a kind of know-nothingism: a way for educated people to feel comfortable with their dogmas, delivered by common sense in a cellophane wrapper of critical thought.
Nevertheless, Duhem is not entirely dismissive of common sense. He offers a rather nice metaphor of common sense as a trust fund: "The fund of common sense is not a treasure buried in the soil to which no coin can ever come to be added; it is the capital of an enormous and prodigiously active association formed by the union of human minds" (p. 261). As he develops the metaphor, he suggests that anything that is appealed to as a timeless axiom of common sense is rather a withdrawal of a discovery made previously.* If the common sense claim proves unworkable, it may be replaced by a more profitable investment.
That seems right, if we conceive of common sense primarily as a source of general dicta like definitions of power-- not the sort of thing we would find perfected in theories that were suited to daily life. Yet Duhem insists that common sense is right in its description of how wagons and horses typically behave. So, he concludes, "observations of common sense are certain to the extent and degree to which they are deficient in detail and precision" (p. 264). They suffice as observations but fail as laws.
Since I have an axe to grind, let's distinguish three ways of understanding common sense:
1. Common sense is a reservoir of general principles: that such-and-so is the nature of power, that our senses are reliable, that all things are the work of god, and so on. This is the 19th-century know-nothing common sense. Duhem advocates fallibilism (perhaps even scepticism) about such dogmatic claims.
2. Common sense is a reservoir of particular judgements: that there is a wagon over there, that the wagons we have seen have stopped when the horse rests, and so on. Duhem calls such claims "true and certain", which seems to overstate things.
3. Common sense is a way of forming beliefs: we trust our senses unless there are specific reasons to think we are deceived, we trust our memory, and so on. Duhem does not discuss this explicitly, but it is a natural way to understand the source of the particular judgments that he thinks of as "true and certain."
And the axe that I'm grinding: Thomas Reid, that great source of the common sense tradition, is too often read as advocating 1, is sometimes read as advocating 2 (e.g., by Van Cleve), and is read as advocating 3 only when he is read correctly.
* William James argues similarly that what is now common sense was once a discovery. Basic truths, James suggests, were the inventions of genius cave men. The Pragmatism lectures were only a few years after Duhem's discussion appeared as articles, so I wonder if James was familiar with Duhem.
Ron alerted me to the existence of Wikipedia Scanner, a service that does reverse lookups to follow anonymous Wikipedia edits back to their source. As one might expect, it has turned up a number of cases in which corporations actively manipulated their own entries. You can get details from Wired-- or almost anywhere else, for that matter. It's already been on Colbert, making this post about two weeks behind the wave.
As someone sceptical about Wikipedia, I feel like I should have something to say about Wiki Scanner. For example, I could point out that doing reverse lookup on IP addresses only works so long as MegaHugeConglomoco does their wiki hatchet work from the corporate office. If they have it done by their PR firm or by employees telecommuting from their homes, Wiki Scanner won't make the connection. And surely any company playing Wikipedia chameleon now will make specific efforts to disguise their involvement.
All of that means that Wiki Scanner cannot generate any reliable statistics about how many organizations are manipulating Wikipedia, how many entries are manipulated, or how many manipulations remain in the present Wikipedia corpus. It sifts out anecdotes of specific abuses. Although illustrative, these anecdotes are about what one would have expected. Of course, it would be fun if it turned up some singular surprises.
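To make that limitation concrete, here is a minimal sketch of the idea in Python-- not Wiki Scanner's actual code, and the organization name and address block are invented for illustration. An anonymous edit can be pinned on an organization only when its logged IP address falls inside a block registered to that organization; an edit made from a PR firm or a home connection simply comes back unattributed.

    import ipaddress

    # Invented example: one organization and one registered address block.
    CORPORATE_BLOCKS = {
        "MegaHugeConglomoco": [ipaddress.ip_network("198.51.100.0/24")],
    }

    def attribute_edit(edit_ip):
        """Return the organization whose registered block contains edit_ip, or None."""
        ip = ipaddress.ip_address(edit_ip)
        for org, blocks in CORPORATE_BLOCKS.items():
            if any(ip in block for block in blocks):
                return org
        return None

    print(attribute_edit("198.51.100.37"))  # "MegaHugeConglomoco" -- edited from the office
    print(attribute_edit("203.0.113.5"))    # None -- telecommuter or PR firm, no connection made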
Addendum, Sep 7
Mark alerts me to Wikipedia trust coloring, software designed to indicate trustworthiness of stretches of text in the Wikipedia. It does so by effectively making the entry history manifest in a single page. The presupposition is that text that has been in the Wikipedia through more edits is more likely to be true, or at least that users are more reliable if text they write persists through more edits.
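To make that presupposition concrete, here is a toy sketch in Python-- my own illustration, not the actual trust-coloring software, and the revision history is invented. Each word of the latest revision is scored by how many consecutive earlier revisions already contained it; a higher score stands in for a darker shade of trust.

    def survival_scores(revisions):
        """For each word in the latest revision, count how many consecutive
        earlier revisions already contained it."""
        latest = revisions[-1].split()
        scores = []
        for word in latest:
            age = 0
            for earlier in reversed(revisions[:-1]):
                if word in earlier.split():
                    age += 1
                else:
                    break
            scores.append((word, age))
        return scores

    # Invented revision history, oldest first.
    history = [
        "Duhem was a physicist",
        "Duhem was a French physicist",
        "Duhem was a French physicist and philosopher",
    ]
    print(survival_scores(history))  # long-surviving words get higher scores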
As Mark puts it:
Now we know, in addition to <some guy said so>, that <some guy said so without other guys bothering to say not-so>.

No. I mean... yes.
Or better: to `some guy said so,' we add `no guy said no' (at least so said the log file); so we're better in the know.
No?
The New York Times Science section recently ran this item on Nick Bostrom's Simulation Argument. It is an odd article, because the Science section usually touts recent or upcoming research. Bostrom's paper touting the simulation argument was in Phil Quarterly in 2003 and had been circulating on the web for a couple of years before that. Moreover, Bostrom has been promoting it as something important for most of this century. He has a website at simulation-argument.com with a simulation argument FAQ and the proviso, "I regret that I cannot usually respond to individual queries about the argument. However, I try to respond to reporters."*
At its heart, the argument is a twist on the standard brain-in-a-vat argument for scepticism. The usual argument points out that there is nothing in our immediate experience of the world to prove that the experiences are not fed to us by a system simulating such a world. Bostrom adds a twist by imagining who the mad scientists running the simulation might be.
Suppose that the human race lasts long enough to be able to run simulations which include people like us. If it does, then descendants of ours might run many similar simulations. There would thus be one actual historical 2007 and many simulated 2007s in which we might be living. Put a uniform probability distribution over them, and you get the conclusion that we are probably not in the actual 2007 but instead in one of the simulations.
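To spell out the arithmetic behind that last step (a sketch only; nothing in the argument fixes the number N of simulations): with one actual 2007 and N simulated ones, a uniform distribution over them gives

\[ P(\text{actual}) = \frac{1}{N+1}, \qquad P(\text{simulated}) = \frac{N}{N+1}, \]

so for any large N the probability of being in the one actual 2007 is negligible.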
As Bostrom notes, the argument really gives you a dilemma: Either future humans will not run so many simulations (because they die out, never develop the capability, or decide not to do it) or we are probably in a simulation.
OK, but what does this add to the evil demon worries that have been with us since the seventeenth century? Instead of the mere possibility that I might be a brain in a vat, it is supposed to yield the high (conditional) probability that I am a brain in a vat. Yet the probability assessment requires thinking about how the world works, which I must do as informed by what I know about the world.
Either we have an answer to the traditional worry or we do not. If we do not, then the new argument is redundant. So suppose we do have an answer to the traditional worry. There are two kinds of answers we might think we have:
First, we might accept a reliabilist premise that our natural faculties are a reliable guide to the truth. If we unflinchingly accept that premise, then we believe already that we are not in a simulation.**
Second, we might trust our natural faculties without an explicit premise that they deliver the truth. Once we accept that standard of evidence, my seeing the world is enough of a ground for me to believe in it. The simulation argument requires that trust to get started and so comes along too late to undercut it. To paraphrase Thomas Reid, starting with trust won't get you a sceptical conclusion.
Suppose, contrary to all that, that the argument leaves me mired in scepticism. I can imagine a great many creatures who might do simulations of creatures like us. I can also imagine creatures that would do simulations of creatures like them. Computational constraints don't put the brakes on this speculation, because powerful gods might want to simulate worlds more constrained than their own; perhaps the computational constraints we know are just features of our world as simulated. There is no sensible way to put a probability distribution over these possibilities. In the Times article, Bostrom is quoted as saying: "My gut feeling, and it's nothing more than that, is that there's a 20 percent chance we're living in a computer simulation." I have no gut feeling on the subject, because I can't make sense of 'chance' here at all.
Apart from the merits of the argument, the story in the Times is a bit disconcerting. It just encourages the all too popular conception of philosophers as purveyors of headtrips and wacky sophisms. But wouldn't I return the call if they wanted to do a story on some wacky sophism of mine? Perhaps I could feign interest.
* The argument has also gotten attention from philosophers; see, inter alia, Brian Weatherson's blogging on the subject.
** David Chalmers has argued that simulation is not a sceptical possibility, but simply an alternate metaphysics. If we are in a simulation, then everything we know about tables, chairs, dogs, ducks, and the rest of the world is true; it's just that those things are (considered fundamentally) part of the simulation, just as we are.