The d and the cog in d-cog 
Working on my distributed cognition paper, I have been thinking along these lines: We cannot treat the skin of an organism as the boundary of every cognitive activity in which the organism is involved; the boundaries of the cognitive system often must be drawn so as to include tools, parts of the environment, and other organisms. Two questions arise: (1) How far should the boundaries be pushed? (2) Why call the activity of these congeries 'cognitive'?

The answer to (1) will depend on the task we have in mind when we are describing the system. Consider doing long division with paper and pencil, a stock example of a d-cog activity. If we specify the task as long division, then the boundaries of the system need to include you plus the pencil and paper. We don't need to include the buckle of your belt, a nearby deep fryer, or Olympus Mons. If we specify the task as doing a smaller division problem, writing down the outcome, carrying a digit, and so on, then the cognitive system just includes you.

Given the first specification, the task doesn't require there being paper at all; paper and pencil are part of the process that implements the task. Given the second, the task involves responding to and modifying the paper as part of the environment.

So, whether the process is distributed and how far depends on the task specification.
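The two specifications can be made vivid with a toy sketch. Everything here (the function names, the `paper` list) is my own illustration, not anything from the post: the coarse task treats the paper as part of the implementing process, while the fine-grained subtasks treat it as environment to be read and modified.

```python
# Toy illustration of the two task specifications; all names invented.

def small_division(partial, divisor):
    """Fine-grained task: a division small enough to do 'in the head'.
    Relative to this task, the paper is external environment."""
    return partial // divisor

def long_division(dividend, divisor):
    """Coarse task: 'do long division'. Relative to this task, the
    'paper' below is part of the process that implements it."""
    paper = []                                  # the external medium
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)
        q = small_division(remainder, divisor)  # subtask done 'in the head'
        remainder -= q * divisor
        paper.append(q)                         # writing down the outcome
    quotient = int("".join(map(str, paper)))
    return quotient, remainder
```

Specified as `long_division`, the system that performs the task includes the paper; specified as `small_division`, it does not.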

Ron Giere answers (2) by saying that 'cognitive' is just a term of art. D-cog might or might not be cognitive in an everyday sense, but it doesn't matter.

This only holds the problem at bay for a moment. We now need to ask what 'cognitive' means qua term of art.

I suggest in the paper that we can provide a rough and ready answer to this question in this way: Call an activity d-cog if (a) the task would count as cognitive if it were implemented in a single brain or mind and (b) the process that implements it actually extends beyond the boundary of a single organism. This allows us to leverage our ability to distinguish cognitive from non-cognitive tasks when considering individual cognition, extending the judgments to cover distributed cases which we might otherwise hesitate to call cognitive.

This handles long division, the examples offered by Ed Hutchins, and others besides. However, I am not sure that it will work in all cases. Nancy Nersessian and her collaborators have done extensive work on a specific research lab. The lab is studying blood vessels. She describes constructs and flow loops meant to simulate blood vessels. As she describes them, the constructs are 'mental models.' This language is partly just provocation, but she clearly thinks that the constructs are part of the cognitive system of the lab.

How should we specify the task that the lab is performing? Suppose we say that the task is learning about blood vessels. A single organism might pursue this task by constructing formal models and operating on them with its prodigious intellect. In so doing, the inquirer would learn about the models and-- if the models were sufficiently like real blood vessels-- learn about blood vessels as well. Certainly, this would be a cognitive task.

The scientists cannot do this, so they build physical models. They revise the physical models over time, much as the imagined inquirer would modify its formal models. And so the scientists learn about blood vessels.

Question (1) returns in this form: Does talking in this way make every experiment into part of the cognitive system that does the experiment? If so, then I think there is a problem. I want to say that many experiments are things we think about, rather than part of the system doing the thinking. Once we extend the cognitive system to include constructs and flow loops, how do we stop it from including everything?

I am not sure, but here is what I am thinking at the moment: The constructs are part of the system that implements the task of learning about blood vessels. Blood vessels are not part of that system, except in the trivial sense that the scientists themselves have blood vessels. Relative to this task specification, the scientists aren't thinking merely about the constructs. Rather, they are thinking about them qua models of blood vessels.

Suppose that a scientist is, on a given afternoon, working with a construct. If we characterize her task as learning about the construct, then we should not count the construct as part of the cognitive system. Since she relies on other instruments, the process will still be distributed-- it just won't be distributed to the thing that she is trying to learn about.

I am tempted by this rule of thumb: If the task is learning about X, then don't include X as part of the process that implements the task.

It is only a rule of thumb, because it breaks down in cases of introspection. It also cannot clearly be applied to cases of mathematical inquiry.

The caveat about introspection has me worried that the rule of thumb is vacuous. We should only include X as part of the cognitive system if the cognition is introspective, but whether the cognition is introspective just depends on whether X is part of the cognitive system.


Further reverberations in the echo chamber 
The mononymous Helmut blogs about my discussion of the wikipedia. He writes: "Ideally, other readers engage in a collective re-editing of each entry, and I like that ideal as a kind of Peircean community of inquirers." As he notes, the ideal, Peircean community doesn't include just anyone. It is open to anybody doing science, but they have to be doing science. People relying primarily on methods of tenacity or authority don't count.

There are serious criticisms of Peirce's claim that the scientific community will eventually come up with the truth. Browsing through recent issues of the Transactions, I can point to a solid paper by Ilya Farber [PDF] and another by Robert Meyers-- and that is only counting the papers authored by friends of mine. It is rarely noted, however, that his claim that the community's opinion will converge on the truth is about the community only for contingent reasons. Scientists need to work together because each human scientist is finite: not enough attention, not enough time. If there were a single inquirer with time and resources enough, then she could converge on the truth as well as an arbitrarily large community.

In this respect, Peirce thinks of scientific methods as definable in terms of a single individual. A scientific community is one in which each member, considered individually, employs those methods. Contrariwise, real epistemic communities are defined as much by the structure of their social networks as by the individuals considered each in isolation.

The issue arises with respect to the wikipedia: Does the structure allow people who do know more to correct for people who know less, or does error swamp wisdom?

There is certainly something that touches on these issues in Peirce's corpus, but I'll leave the archival work as an exercise for the reader.

Gossiping in the echo chamber 
More ruminations about the reliability of the wikipedia; cf. my earlier post Reliability on Wikipedia.

Meandering off-task this morning, I was browsing the wikipedia entry for Aldous Huxley. It claims that he wrote the original screenplay for Disney's Alice in Wonderland. The entry for Alice does not corroborate this, so I searched more broadly. Another site's encyclopedia makes a similar claim, and a further website describes the contribution as uncredited. That other encyclopedia is covered by the GNU Free Documentation License and is, for all appearances, a cut-and-paste from the wikipedia. So it repeats rather than corroborates.

The wikipedia seems to serve as a relay in this way: Someone, call them Alpha, says X on their webpage. Alpha or someone who has read Alpha's webpage writes X into a wikipedia entry. Other people read it and repeat X on their own websites or in on-line discussions. Because the wikipedia is consulted more often than particular websites, this amplifies the usual echo chamber effect. Wikipedia also has an air of comprehensiveness and ubiquity that makes people less likely to acknowledge it specifically.

In my jargon, this makes sampling a less effective method than it would otherwise be.
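The sampling point can be made concrete with a toy simulation. All the numbers and names below are invented: one origin asserts X, most sites copy it (directly or at second hand), and the apparent number of corroborating sources swamps the number of genuinely independent checks.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# Toy model of the relay: copies of copies masquerade as
# independent sources. All numbers here are invented.

def simulate_sites(n_sites, p_copy_from_wiki):
    """Each site either copies the wikipedia (inheriting its claim)
    or checks independently (a genuine fresh 'sample')."""
    origin_claim = "Huxley wrote the screenplay"
    sites = []
    for _ in range(n_sites):
        if random.random() < p_copy_from_wiki:
            sites.append(("copy", origin_claim))
        else:
            sites.append(("independent", "checked primary sources"))
    return sites

sites = simulate_sites(100, 0.8)
apparent = len(sites)                        # what a casual reader sees
independent = sum(1 for kind, _ in sites if kind == "independent")
# Apparent corroboration (100 sites saying X) far exceeds the
# number of genuinely independent checks, so sampling the web
# overstates the evidence for X.
```

The more sites copy the relay rather than check, the worse sampling performs as a method.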

Since this is simply a matter of curiosity for me, I could easily have accepted this without much scrutiny. If I had added it to my stock of beliefs, I could easily have done so without remembering where I had read it. If I recalled it later in some other context, I might rely on it because I believed it.

The worry about Huxley and Alice is just that the wikipedia can amplify ignorance or carelessness. Greater concerns arise when people start deliberately manipulating entries for their own ends. The defamation of John Seigenthaler seems to have been a practical joke, but more insidious manipulations are possible. Congress seems to be in on the act: congressional staffers are manipulating the entries on their bosses and their bosses' adversaries [via ShortWoman].

UPDATE: Patrick Barkham has a clever piece in the Guardian about political spin of wikipedia entries.

Significance in the 20th century 
Working on the d-cog paper and teaching Understanding Science again have got me ruminating on scientific significance.

In The Advancement of Science, Philip Kitcher first advocated the view that science aims not at truth but at significant truth. At the time, he treated significance as an objective feature of some truths. To set up what I say below, here is an excerpt from a paper I wrote in Spring '97:

* * * begin flashback * * *

Kitcher sees science as aiming to adopt significant truths. Traditional attempts to understand scientific significance in terms of systems of universal generalizations have led to problematic schemes to measure truth content. So, he writes:
My approach circumvents these difficulties by offering a quite different view of scientific significance. A significant statement is a potential answer to a significant question. What we strive for, when we can get them, are true answers to significant questions. [p.118]
The explanatory schemata of the consensus practice suggest questions of intrinsic significance, and many other questions derive significance as a step toward answering these primary questions. Significance, then, is a shared sense that points the scientific community to investigate different things. When a significant truth is discovered, it is taken up into consensus practice.
Although Kitcher assumes that significance is uniform across the community, it's not clear why that should be so. A field biologist might find considerable significance in the migratory habits of birds, but little or none in molecular genetics. Conversely, another biologist might find great significance in genetics but none in migrations. This is not simply a matter of caprice-- suppose the first biologist's work relies on her using the best available information regarding migration, but nothing relies on what she thinks about the birds' genomes. Even if researchers differed over the significance of certain truths, we would like people who apply scientific discoveries to be working with the best that science has to offer. Although it may not be critical for the discovery of further significant truths that a clinician use the best medical knowledge we have, it may be necessary for the survival of his patients. So, significance is tied to practical concerns of two kinds: (i) the discovery of further significant truths and (ii) the achievement of certain technological goals.

It follows immediately that the sectors of the scientific community which ought to hold the community's best candidates for truth are those for whom that truth would be a significant truth. Consider, for example, information concerning cancer. Doctors treating cancer patients should clearly employ those beliefs most likely to heal their patients. Given Kitcher's realism, this is just to say they should employ those beliefs most likely to be true. Cancer researchers, in order to discover new truths, ought to begin by employing background knowledge that is likely to be true. Other members of the scientific community (other doctors, marine biologists, theoretical physicists) needn't believe the community's best candidates for the truth about cancer at all, unless doing so is required to have oncologists believe it. So, the aim of science need not be conceived as adopting significant truth into consensus practice or even as spreading particular truths as widely as possible. Instead, all that matters is that truths are held by the people who actually need them; that is, people for whom they are significant. If the class of people who found a question significant were small, then the aim of science would be consistent with the majority of scientists believing anything whatsoever about it. Many of them may well cling to a falsehood, but why should this matter if it's not a significant falsehood for them?
The notion of significance developed above is, in a sense, recursive. ... Clause (i), by referring to significant truths, forces us back into the definition. Only facts used to achieve technical objectives are significant in themselves. Clause (ii) is just about technological prowess. Does significance then just reduce to the ability to perform technical feats? The concern is that proper deference to the significance of significant truths makes them all significance and no truth. How might such bald pragmatism be softened?

* * * end flashback * * *
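The recursion in clauses (i) and (ii) of the flashback can be rendered as a reachability check: a truth is significant iff it serves a technological goal directly, or stands at the head of a chain of 'helps discover' links that ends at one. Here is a toy sketch; the facts and links are invented, not from any of the texts discussed.

```python
# Toy rendering of the recursive definition: a truth is significant
# iff it achieves a technological goal (clause ii) or helps discover
# some other significant truth (clause i). Facts and links invented.

helps_discover = {
    "migration data": ["population model"],
    "population model": ["conservation policy"],
    "stamp catalogue": [],          # leads to no practical objective
}
technological_goals = {"conservation policy"}

def significant(fact, seen=None):
    seen = seen or set()
    if fact in technological_goals:          # clause (ii): base case
        return True
    if fact in seen:                         # block circular chains
        return False
    return any(significant(nxt, seen | {fact})
               for nxt in helps_discover.get(fact, []))
```

On this rendering the worry is visible in the code: every `True` bottoms out in `technological_goals`, so significance is exhausted by technical objectives-- which is just the bald pragmatism at issue.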

In Science, Truth, and Democracy, Philip changed his view and argued that significance depends on what we care about. He also offered an answer to the objection: Significance is not necessarily practical. There are some questions we are motivated to ask just on the basis of natural human curiosity.

The appeal to natural human curiosity always strikes me as thin and philosophically unsatisfying. Nevertheless, I feel the visceral appeal of it. It is simply cool to learn about dinosaurs, for instance, and that coolness lends some significance to paleontological research. It is not simply that constructing big skeletons in museums is cool, because we could more easily construct fictional but impressive dragon skeletons. The coolness of dinosaurs is due, in part, to the fact that they really did exist and that our account of them-- although not true in all its details-- is based on the best evidence available.

Call this the dinosaur argument against bald pragmatism.

File under 'words are curious things' 
I am aware that the words 'philosophy' and 'philosophical' are commonly employed in ways that have nothing to do with academic philosophy, but a story in today's NY Times seemed obviously wrong to me. The story, by Denise Grady, is about a GI who suffered crippling injuries in Iraq. She writes:
Corporal Poole is philosophical. "Even when I do get low it's just for 5 or 10 minutes," he said. "I'm just a happy guy. I mean, like, it sucks, basically, but it happened to me and I'm still alive."

It turns out that this usage is perfectly kosher. One on-line dictionary offers 'meeting trouble with level-headed detachment' as a second definition for 'philosophical.'

For any readers who are completing a dissertation in philosophy at this time, I suggest this as an epigraph: "I mean, like, it sucks, basically, but it happened to me and I'm still alive."

