'Words are curious things' redux 
Stijn writes a blog entry about the meaning of 'philosophy' and links to a sarcastic post that I wrote on the subject. I noticed the link, followed it back, saw that he quoted me, and wondered as to the context. Turning to Babelfish for a translation, I got the following:
If philosophy is the answer on the question what you study, the response is at the person asking the question generally of stupefaction. Followed by or "what is that for something?", or "what will do you with that?". In a joke this is as follows reflected: When I my grandmother answered told that I doctor in philosophy became, them: "terribly, but what kind of a sickness is philosophy actual?"

If people the word knows philosophy already, it is frequently in the meaning which is indicated by Van Dale online ones: relativising, contemplative. We have to that thank meaning probably to the philosophy of Epicurus and the stocijnen, with their philosophy as life wisdom.

Don't we don't have make urgently work of a more general term of what philosophy are? Or we must use the meaning also at academic level and, such as P.D. Magnus presents on its blog, our thesis conclude with the philosophical attitude:

"I mean, like, it sucks, basically, but it happened to me and I'm still alive."
Why, yes... all my base does belong to them. How nice of you to notice.

The Revenge of the Dinosaur Argument 
I have commented on Philip Kitcher and scientific significance before, both here and in the d-cog paper. To briefly recap Philip's argument in ch 6 of Science, Truth, and Democracy, he claims that science aims at finding true answers to significant questions. Questions can be significant for any of three reasons: (A) They relate to some more fundamental, antecedently significant question. (B) They relate to our projects and how we can best attain our goals. (C) They answer to natural human curiosity.

A crude pragmatist theory of truth, according to which an answer is true if it gets us what we want in the short term, would be enough to capture A and B. Kitcher wants truth in a stronger sense than this. Without C, this insistence loses its force.

Of course, all of these factors vary between individuals and between cultures. If two people disagree about which things are significant due to A, then we can trace it back to a disagreement about which questions are antecedently significant. If they disagree due to B, we can trace it back to a disagreement about which goals we should be pursuing. If they disagree due to C, however, there is nothing further to say. One person is curious and another is not. End of discussion.

Kitcher covers this over by calling it natural human curiosity. A total lack of curiosity would be a cognitive failure, I suppose, but someone can fail to be curious about specific things without thereby being an inhuman monster. If someone doesn't care about dinosaurs, there is nothing further to say to them.*

I cannot think of any way besides these three by which a question could be recognizable as significant. Nevertheless, there are a great many lines of research which turn out in retrospect to be significant. Research programmes can lead in unexpected directions. Work that seems hifalutin now may yield spectacular applications down the road. This suggests another way in which questions can be significant: (D) Answers to them will lead to subsequent work that can be exploited to attain our goals, but we do not yet know which goals exactly or how.

We cannot be confident that a question is significant due to D in the way that we can with the other factors. It will inevitably involve a kind of hunch. Nevertheless, this is a standard defense of so-called pure research as opposed to applied research. The very possibility that there may be answers of this kind is enough to rebut crude pragmatism. Questions can have true answers-- or at least better or worse answers-- well before the answers have practical utility.

As Dewey often emphasized, and as Kitcher recognizes, our objectives may change over the course of enquiry. As such, there are further ways that questions may be significant: (B') They relate to projects we will pursue and how we can best attain goals we will have. (D') Answers to them will lead to subsequent work that can be exploited to attain goals we will have.

B' and D' are also matters which we cannot judge in advance, because we do not yet know what we will value. If we already did, then arguably we would already value it as a long-term objective.

Since Kitcher argues that science aims at significant knowledge, he is pressed to say that significance must be something we can recognize in advance of doing science. If we can't know it in advance, we can't aim at it. As such, although he recognizes that science can lead to unexpected developments and that our goals will change, he is not free to put D, B', and D' forward as distinct sources of scientific significance.


* In principle, there could be topics about which all humans are inclined to be curious. As an empirical matter, however, it just isn't so-- as I think Philip would concur. In a different context, he rejects the notion "that a yearning to satisfy curiosity is essential to being human" [p. 165]. (Although he does so without argument. The remainder of the paragraph seems to me to be a non sequitur.)

Bacchanalian realism 
An old line of thought, resurfacing in recent rumination:

In asking whether categories are real, the problem is sometimes posed in this way: Is the world really objectively divided into real kinds of things, or is it just facts about us (our languages, our cultures, our interests, the ways our minds are set up) that make us tend to divide it in one way rather than in any other?*

The question presupposes a dichotomy between two positions:

(1) Facts about us do not influence which categories should appear in our science, because there is some relatively small number of real kinds. One job of science is to figure out what those kinds are. Call this position realist monism.

(2) The world could be divided up in many different ways. Our science should describe kinds that reflect what we want, because there is no privileged list of correct kinds. Call this position anti-realist pluralism.

From the way I labeled these positions, it should be clear that I do not think that these are the only two options. One can readily construct two others:

(3) Anti-realist monism might seem like a weird position, but it is not so far from the idealist tradition. According to Kant, the kinds in the empirical world do depend on how our minds are set up. Yet there is a relatively constrained set of objective kinds. Kant insists that these are empirically real but transcendentally ideal-- which might just be a way of saying that he is an anti-realist monist about kinds.

(4) Realist pluralism has been argued for by a number of recent philosophers, although not by that name. John Dupre has argued that there is no privileged biological taxonomy. The kind 'fish' in the old sense of finned sea life is as real and out in the world as the modern kind 'fish'. Biologists distinguish species in several different ways, all of them marking real distinctions in nature. Cooks and gardeners distinguish species in ways that may not be useful for biological purposes but which are no less real.

Philip Kitcher argues that the world contains too many real kinds to represent in our science. There really are more things on Heaven and Earth! So our interests lead us to select one set of categories rather than one of the infinitely many others that we might select. Yet the truths about these categories are not determined by what we want or by facts about us. Once we adopt one meaning of 'fish' rather than another, the world determines which sentences about fish are true.

Suppose we accept some variety of realist pluralism. How pluralist should it be?

To take an example I've used in lecture: Let 'liz' mean the front end of a lizard, and let 'ard' mean the back end of a lizard. Surely there are lizards out in the world, and most of them have front ends and back ends. So there are front ends of lizards and back ends of lizards out in the world. But are the categories liz and ard real?

Dupre coined the phrase promiscuous realism. It suggests a pluralism that embraces all the kinds you could imagine, even stupid ones with no plausible usage-- kinds like liz and ard. In a conversation at the last PSA, however, he suggested that he had nothing quite so extreme in mind. He only meant to include different biological kinds, along with non-scientific kinds used by gardeners, cooks, and so on. All of those kinds are practically useful and can be used to formulate regularities that we care about.

Kitcher seems committed to the reality of liz. He proposes an object, the career of which consists of the last decade of Bertrand Russell and the first decade of his dog Bertie. There is no decisive objection against the reality of this Bert-cum-Bert, but it does not count as a creature in our taxonomy. Moreover, there is no reason to adopt a taxonomy that would distinguish this gerrymandered monster. That is just about the categories we have decided to recognize, however, not about whether there really is such a thing in the world.

Kitcher does not give a name to this position, but it is more promiscuous than Dupre's promiscuous realism. This is unfortunate, because there is no natural term for something that is more than promiscuous. Orgiastic realism? Many other jokes suggest themselves, but I leave them as an exercise for the reader.


* This formulation is a direct quote from Robert M. Martin's chatty textbook Scientific Thinking, p. 215.

D-cog in d-machine 
This is the last entry written in a Hungarian cafe. There is a revised version of the d-cog paper, and here is an aside that did not make the cut:

Some authors distinguish between collective cognition and distributed cognition, both of which are distinct from individual cognition. In individual cognition, the cognitive activity is done by one person and the representations are all contained in the mind of that person. In collective cognition, several people are involved and the task is not merely the sum of their individual cognitive tasks. In distributed cognition, the task is carried out by one or more people along with cognitive artifacts like chalkboards, computers, and what all else.

The distinction does not strike me as being important, and I do not think we need jargon to mark it. As I use the phrase, distributed cognition (d-cog) includes all the non-individual cognition: cases that involve artifacts and also those that involve multiple agents but no artifacts. Two reasons:

First, the insight of d-cog is that a system can perform a task that we think of as a cognitive task without any of the agents involved having the big-picture task in mind. This happens when there are material artifacts like scratch paper and hoeys, but it also happens when people unreflectively and collectively arrive at a solution to a problem. Calling the former distributed but the latter collective overlooks the important similarity.

Second, both kinds of cases involve representations outside of the agents' minds. When I calculate a large product on a chalkboard, the black-and-white representations are enduring. When a couple engages unreflectively in the kind of cueing that allows them to remember as a couple more than either might remember alone, they trade more transient but equally real representations. This phenomenon is known as 'transactive memory', and I think it may count as d-cog. In any performance that is not merely an aggregate of several people's individual cognition, some representations will be traded around.

In the paper, I cut through some of the jargon wrangling by using 'd-cog' rather than 'distributed cognition' when I mean the thing that I am talking about in the sense that I have specified. If the locution catches on and comes to mean something more diffuse, perhaps I could distinguish just the thing I have in mind with the phrase distrizzibuted cognishizzle.

Jon, Ron, and the wages of sin 
Another entry written in Cafe Isolabella, this one riffing on blog entries written by two of my colleagues. They seem related without actually talking about the same thing, so here is a bit of conceptual connect-the-dots.

Jon comments on a study finding that praying for patients seems to have no positive effect on outcomes. As commenters note, the study presumes that the effect of prayer to look for is an effect on the thing prayed about. This views prayer as a kind of divine technology, invoking God to convert the supplicant's faith into worldly outcomes. An alternate view of prayer would direct us to effects on the person praying. Commenters recount being taught the latter view in their Catholic upbringing, and it is the view of some moderate protestant denominations. Studies may observe an effect of this kind, but such an effect does not require a supernatural explanation. It is prayer as divine technology that would be spooky action-at-a-distance.

Ron comments on a study showing that atheists are the most distrusted minority in the US. The religious complaint about atheists is that-- because they don't believe in God-- they cannot comprehend the demands of ethics.

For some, I suppose, this might be a metaphysical conclusion: If there were no God, then nothing would be prohibited. Not everyone sees this as a reason to be religious; cf. Dostoyevsky, Nietzsche, Sartre. And the conclusion seems to presume the bankrupt divine command theory of morality; cf. Plato. I do not want to kvetch about the metaphysical argument, though, because I think that most people see it as a matter of moral psychology rather than metaphysics. The promise of an afterlife is supposed to motivate good behaviour and discourage bad behaviour. The wages of sin are death.

Mill comments that this vision of the afterlife is barbaric, noting the contrast with ancient Greeks who thought that one might want to do what was good even without threats and promises. Even so, it is easy to see how the vision is motivated. It is disheartening that virtue is not uniformly rewarded and vice uniformly punished in this world. An afterlife would remedy this obvious flaw in the actual world.

The divine technology view of prayer is even more barbaric. It forgets that vice often brings profit and virtue none at all, pretending instead that God will reward believers in this world and just in the way that they want to be rewarded. The divine technology view, were it true, would make an afterlife redundant. If the virtuous thought to ask, final judgment could be apportioned in medias res.

Sadly, many of my compatriots believe in both. As the T-shirt says: Dear lord, please protect me from your followers.

