I admit that science often involves a relatively detailed theoretical framework developing in dialogue with empirical work. However, Paul overstates the case when he uses this as a demarcation criterion. I want to argue that lacking a rich explanatory framework does not make a discipline ipso facto pseudoscientific.
Imagine that parapsychologists had discovered robust correlations between (say) the thoughts of nearby bald men and the vibrations of pink quartz crystals. Suppose further that these regularities allowed the construction of reliable telepathic lie detectors. The enquiry would certainly count as scientific, even if parapsychologists had no explanation for these regularities.
As Paul notes, parapsychology has not generated any robust, reproducible results like this. That is damning for parapsychology. My point is merely that empirical success alone is enough to sustain a scientific research program, at least for a while, and so the Churchland/Feyerabend criterion is not satisfactory as a demarcation criterion.
The example of gestalt psychology is instructive in this regard. The gestalt psychologists discovered interesting phenomena. Cataloging and organizing these phenomena sustained a legitimately scientific research program for many years. Eventually, the research program stopped generating new results and became degenerate. The gestalt folks were not able to give deeper explanations for the phenomena they had discovered. When they tried, they naïvely extrapolated from phenomenal structure to brain structure. The result was a bad theory. I learned from Lakatos that a degenerate research program doesn't become non-science; it just becomes bad science that ought to be abandoned.
One might still try to defend a more conservative version of the demarcation criterion: A discipline is unscientific if all it does is find anomalies for an existing research program.
I still think this says too much. I agree that you can't have a distinct scientific research program just by tabulating anomalies for an existing research program. Since any research program will face some anomalies, tabulating anomalies might not even be interesting scientific work.
In the case of parapsychology, the damning thing is that there are no systematic anomalies for a materialist approach. Unlike gestalt psychology, parapsychology has not discovered any robust phenomenal laws. We might say that parapsychology is a non-science because of that, but we might instead say that it is just a really terrible science.
A stray thought that didn't make it into the induction paper:
In his 2000 BJPS article, John Worrall writes:
Recognising that some proposition is indeed a theorem of some axiomatic system is clearly an outstandingly creative act... But what else can a great mathematician be doing when recognizing that proposition P is a theorem, but somehow-- and clearly in large part subconsciously-- going through some mental process that amounts to the construction of a sketch-proof for P? [fn. 13]
Is there anything that indicates the shift from argument to bold assertion more clearly than a rhetorical question?
A mathematician, in situ, might arrive at a conclusion in any number of ways. The public defense of that conclusion requires that it pass muster by public standards. It is important not to get confused and think that the private process must already mirror the public debate that follows.
Pattern recognition-- as a psychological matter-- is perceptual rather than inferential. Mathematicians are trained to recognize theorems. Good ones can recognize that something is a theorem on sight, without even thinking out the sketch of a proof. For most theorems identified in this way, they can provide a proof-- but that is a separate matter. It seems natural enough to think that great mathematicians might recognize in the same intuitive way that some novel, thrilling P is a theorem even when they are unable to give a proof of P. There doesn't need to be a subconscious sketch-proof lurking in the recesses of their brain.
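To make the thought concrete: something in the neighborhood of Goldbach's conjecture is a claim many mathematicians would bet is a theorem, even though nobody can supply a proof or even a sketch of one. A proof assistant makes the gap explicit. Here is a toy rendering in Lean 4 with Mathlib-- my illustration, not Worrall's; the statement and its name are mine:

```lean
import Mathlib

-- Every even number greater than 2 is the sum of two primes.
-- Mathematicians widely expect that this is a theorem; no one can
-- produce a proof. `sorry` marks exactly that situation: the
-- proposition is asserted as a goal, and the proof is left as a hole.
theorem goldbach (n : ℕ) (h2 : 2 < n) (heven : Even n) :
    ∃ p q, Nat.Prime p ∧ Nat.Prime q ∧ p + q = n := by
  sorry
```

The point is just that asserting-as-theorem and possessing-a-sketch-proof come apart, even in the most regimented setting we have.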
(This is some support for the discovery/justification distinction, even though it is now fashionable to diss that distinction.)
I had the idea for this paper several years ago, but the pieces only clicked into place recently. It has reached the whole-draft stage, so I'm posting a copy.
Eliminating induction
Scientific inference is usually taken to be ampliative. According to some accounts, however, it is deductive: Apparently ampliative inferences are really deductive inferences with suppressed premises. Norton dubs these `material theories of induction.' They represent one approach to reconstructing scientific inference. This paper argues from general considerations about inference to show that there is no logical reason to prefer material theories over other reconstructions. The consequences for material theories of induction depend on what they are meant to do: They may succeed as descriptive accounts, and they may provide sound, practical advice, but they cannot ground the justification of scientific claims any more firmly than non-material theories.
Reading Peter Winch's The Idea of a Social Science (1958), I was surprised by the following passage:
The accepted view runs, I think, roughly as follows. Any intellectual discipline may, at one time or another, run into philosophical difficulties, which often herald a revolution in the fundamental theories ... Those difficulties [bear] many of the characteristics which one associates with philosophical puzzlement and they [are] notably different from the technical theoretical problems which are solved in the normal process of advancing scientific enquiry. [pp. 42-3]
Winch gives the example of Einstein's development of relativity.
Later in the book, he contrasts the discovery of a new germ ("a discovery within the existing framework of ideas") with the development of germ theory itself. The latter involves "not merely a new factual discovery within an existing way of looking at things, but a completely new way of looking at the whole problem of the causation of diseases, the adoption of new diagnostic techniques, the asking of new kinds of questions about illness, and so on" [pp. 122-3]. This is the Kuhnian contrast between normal science (in which work goes on inside a theoretical framework) and revolutionary science (in which a new framework is introduced). The only thing Winch lacks is a nice term like paradigm with which to describe the whole matrix introduced by the germ theory.
It is typical for philosophers to treat Kuhn's The Structure of Scientific Revolutions as a watershed, anticipated only in the work of N.R. Hanson. This was the way I was taught in science studies courses, without even that passing reference to Hanson. The Edinburgh School (for example) is treated as a post-Kuhnian development, a rivulet running from the Kuhnian watershed.
Not only does Winch have the distinction between normal and revolutionary science four years before Kuhn, but he also considers that distinction to be no big deal. It is, he says, "the accepted view."
Barnes, Bloor, Shapin, and the rest of the Edinburgh crowd were post-Kuhnian in the sense of writing after Kuhn, of course, but their approach to science studies belongs to a tradition that predates Kuhn. Their use of Wittgenstein follows in Winch's footsteps; it is not merely a theoretical framework deployed to cash out Kuhnian insights.
This might be obvious to anyone who lived through more of the history than I have, but it is not something that I could glean from philosophy of science as it was taught to me. Just as science students are taught cleaned-up, textbook science, I was taught cleaned-up science studies in which Kuhn was the hero.
I've been thinking about Roger White's essay `Epistemic Permissiveness' (available on his website), and I have an argument that I want to try out.
Permissive cases, in White's jargon, are ones in which it would be possible for two agents with the same evidence and background knowledge to disagree about the matter at hand; i.e., in which it is compatible with rationality to believe P and equally compatible with rationality to believe not-P instead. Epistemic permissiveness is the doctrine that there are some permissive cases.
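In symbols (my notation, not White's), writing R(E, P) for "believing P is rationally permissible on total evidence E, background knowledge included":

```latex
% My notation, not White's: R(E,P) says that believing P is
% rationally permissible on total evidence E (background included).
\[
  \mathrm{Permissive}(E,P) \;\equiv\; R(E,P) \,\wedge\, R(E,\neg P)
\]
% Epistemic permissiveness is the existential claim that such
% evidence--proposition pairs exist:
\[
  \exists E\, \exists P\;\, \mathrm{Permissive}(E,P)
\]
```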
White's arguments aim to uncover a kind of deliberative irrationality in permissive cases. Consider a schematic example: Before I collect evidence about some contingent matter P, I do not have a belief that P or that not-P. I make some observations, consult some experts, and so on. The evidence that I collect leads me to believe P. If I know that this case is a permissive case, however, I know that I might rationally have come to believe not-P on the basis of the same evidence. Whether I believe P or not-P depends on the way in which I decide to be rational, not on the force of the evidence. If the difference between believing P and believing not-P just depends on choice or contingency in this way, I might as well have decided which to believe before collecting any evidence.
I have rendered the argument in a ham-handed way, making it look too much like the problem in my Peirce paper. However, I think that what I say below goes through even for White's more subtle formulation of the problem.
White admits that the deliberative incoherence evaporates if no agent can judge-- except perhaps retrospectively-- that a situation they are in is a permissive case. It would then be impossible for my belief that P to be undermined by rumination about the fact that I might rationally have believed not-P instead. He dismisses this approach:
...while this position may be coherent and escape the objections thus far, I doubt that anyone holds such a view, as it is hard to see what could motivate it. (p. 10)
Many in philosophy of science have been tempted to say that rationality is a feature of epistemic communities and not of isolated individuals. I will sketch a mild version of this claim, as advanced by Philip Kitcher in the 90s and Helen Longino in her more staid moments; I do not need the more revolutionary versions advocated by Lynn Hankinson Nelson, Kitcher after the millennium, and Longino in her wilder moments.
In discovering what the world is like, there are pressures to reason in different ways. Some discoveries would never be made if we did not leap to bold new hypotheses, but sometimes leaping would lead us down blind alleys and into theoretical box canyons. The rational thing to do is to spread the epistemic risk: Have some scientists pursue wild theories while others defend orthodoxy. Promising new leads will be followed up by someone, and the community will follow along only once a critical mass of evidence has been gathered in their favor. Call this the collective strategy for scientific development.
A further fact about human agents is that we are better at exploring new theories if we believe they might be true and we are better at defending orthodoxy if we believe that the challenging view is false. This means that the collective strategy requires some people (the pioneers) to believe P and others (the old guard) to believe not-P, even when confronted by the same evidence and arguments. The collective strategy yields permissive cases.
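A toy simulation makes the structure vivid. This is a minimal sketch of my own devising, not a model that Kitcher or Longino offer; every parameter in it (truth_bias, the adoption thresholds, critical_mass) is invented for illustration. Agents all see the same public evidence and differ only in how much net evidence they demand before adopting the new theory:

```python
import random

def run_collective_strategy(
    truth_bias=0.6,       # invented: chance each signal favors the new theory
    n_pioneers=3,         # adopt on weak net evidence
    n_old_guard=7,        # demand strong net evidence
    pioneer_threshold=2,
    old_guard_threshold=10,
    critical_mass=0.5,    # fraction of adopters that converts the community
    max_rounds=200,
    seed=0,
):
    """Toy model: shared public evidence, divergent adoption thresholds."""
    rng = random.Random(seed)
    thresholds = [pioneer_threshold] * n_pioneers + [old_guard_threshold] * n_old_guard
    net_evidence = 0  # the same running total for everyone
    for round_num in range(1, max_rounds + 1):
        net_evidence += 1 if rng.random() < truth_bias else -1
        adopters = sum(1 for t in thresholds if net_evidence >= t)
        if adopters / len(thresholds) >= critical_mass:
            return round_num, net_evidence
    return None, net_evidence

round_num, evidence = run_collective_strategy()
print(f"community converted in round {round_num} with net evidence {evidence}")
```

Mid-run, the pioneers believe P while the old guard believe not-P on exactly the same public evidence. That is the permissive case, and it is generated by the division of labor rather than by any defect in the evidence.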
Permissive cases occur only around legitimate scientific controversies, so they are not ubiquitous. Moreover, it will only be clear in retrospect whether the pioneers were heading to a new frontier or down a dead end. Deliberation at the time cannot be undermined by considering that this is a permissive case. This seems to be exactly the kind of view that White considers coherent but unoccupied. And someone holds such a view-- namely me, at least some of the time.