The d and the cog in d-cog
Working on my distributed cognition paper, I have been thinking along these lines: We cannot treat the skin of an organism as the boundary of every cognitive activity in which the organism is involved; the boundaries of the cognitive system often must be drawn so as to include tools, parts of the environment, and other organisms. Two questions arise: (1) How far should the boundaries be pushed? (2) Why call the activity of these congeries 'cognitive'?
The answer to (1) will depend on the task we have in mind when we are describing the system. Consider doing long division with paper and pencil, a stock example of a d-cog activity. If we specify the task as long division, then the boundaries of the system need to include you plus the pencil and paper. We don't need to include the buckle of your belt, a nearby deep fryer, or Olympus Mons. If we specify the task as doing a smaller division problem, writing down the outcome, carrying a digit, and so on, then the cognitive system just includes you.
Given the first specification, the task does not require that there be paper at all; paper and pencil are part of the process that implements the task. Given the second, the tasks involve responding to and modifying the paper as part of the environment.
So, whether the process is distributed and how far depends on the task specification.
Ron Giere answers (2) by saying that 'cognitive' is just a term of art. D-cog might or might not be cognitive in an everyday sense, but it doesn't matter.
This only holds the problem at bay for a moment. We now need to ask what 'cognitive' means qua term of art.
I suggest in the paper that we can provide a rough-and-ready answer to this question: Call an activity d-cog if (a) the task would count as cognitive if it were implemented in a single brain or mind and (b) the process that implements it actually extends beyond the boundary of a single organism. This allows us to leverage our ability to distinguish cognitive from non-cognitive tasks when considering individual cognition, extending the judgments to cover distributed cases which we might otherwise hesitate to call cognitive.
This handles long division, the examples offered by Ed Hutchins, and others besides. However, I am not sure that it will work in all cases. Nancy Nersessian and her collaborators have done extensive work on a research lab that studies blood vessels. She describes constructs and flow loops meant to simulate blood vessels; as she describes them, the constructs are 'mental models.' This language is partly just provocation, but she clearly thinks that the constructs are part of the cognitive system of the lab.
How should we specify the task that the lab is performing? Suppose we say that the task is learning about blood vessels. A single organism might pursue this task by constructing formal models and operating on them with its prodigious intellect. In so doing, the inquirer would learn about the models and -- if the models were sufficiently like real blood vessels -- learn about blood vessels as well. Certainly, this would be a cognitive task.
The scientists cannot do this, so they build physical models. They revise the physical models over time, much as the imagined inquirer would modify its formal models. And so the scientists learn about blood vessels.
Question (1) returns in this form: Does talking in this way make every experiment into part of the cognitive system that does the experiment? If so, then I think there is a problem. I want to say that many experiments are things we think about, rather than part of the system doing the thinking. Once we extend the cognitive system to include constructs and flow loops, how do we stop it from including everything?
I am not sure, but here is what I am thinking at the moment: The constructs are part of the system that implements the task of learning about blood vessels. Blood vessels are not part of that system, except in the trivial sense that the scientists themselves have blood vessels. Relative to this task specification, the scientists aren't thinking merely about the constructs. Rather, they are thinking about them qua models of blood vessels.
Suppose that a scientist is, on a given afternoon, working with a construct. If we characterize her task as learning about the construct, then we should not count the construct as part of the cognitive system. Since she relies on other instruments, the process will still be distributed -- it just won't be distributed to the thing that she is trying to learn about.
I am tempted by this rule of thumb: If the task is learning about X, then don't include X as part of the process that implements the task.
It is only a rule of thumb, because it breaks down in cases of introspection. It also cannot clearly be applied to cases of mathematical inquiry.
The caveat about introspection has me worried that the rule of thumb is vacuous: we should include X as part of the cognitive system only if the cognition is introspective, but whether the cognition is introspective just depends on whether X is part of the cognitive system.
Tue 31 Jan 2006 07:32 PM