Cited more often than the norm 
Justin at Daily Nous quotes the statistic that "82 per cent of articles published in humanities are not even cited once." Turning this around, only 18% are cited.

I was curious about how my own papers fared in this regard. Starting with data from Google Scholar and correcting some entries by hand, 68% of my publications have been cited. One of the corrections was to set aside articles which had only been cited by me in other articles. Counting those self-citations, the rate jumps to 78%.
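A back-of-the-envelope way to see how the two rates come apart, using made-up citation counts rather than my actual Google Scholar records (the numbers below are hypothetical):

```python
# Toy illustration: citation rate excluding vs. including self-citations.
# Each entry is (citations by others, self-citations) for one publication.
publications = [
    (3, 1), (0, 0), (5, 0), (0, 2), (1, 0),
    (0, 0), (2, 1), (0, 0), (4, 0), (0, 1),
]

# A paper counts as cited-by-others only if someone else cited it.
cited_by_others = sum(1 for others, _ in publications if others > 0)
# It counts as cited-at-all if anyone, including the author, cited it.
cited_at_all = sum(1 for others, selfs in publications if others + selfs > 0)

print(f"cited, excluding self-citations: {cited_by_others / len(publications):.0%}")
print(f"cited, including self-citations: {cited_at_all / len(publications):.0%}")
# With these toy numbers: 50% excluding self-citations, 70% including them.
```

The same tallying over my real records gives the 68% and 78% figures above.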

In a more self-serving mood, I might attribute this to the quality of my work, but that is only one factor here.

Another factor is that all of my papers are readily available on-line. Once there's a draft worth sharing, I post it to my website. I update it with my final draft once it's accepted for publication, and I continue to make it available. The result is that people who are puttering around on a topic are likely to come across my work, and then they can cite me. This is certainly how forall x, my open-access logic textbook, has come to be cited 11 times. And I have some conference papers and working drafts which have been cited even though they've never been available anywhere but on my website.

In discussions of whether or not to post papers on-line, people underrate the advantages. People who notice my work because it's on-line almost never tell me about it, but sometimes they do cite me.

Counting journals that count 
There is administrative pressure, for purposes of tenure and promotion, to have a list of the top philosophy journals. Some disciplines have lists which are approved by their professional organizations, so people from outside the department are sometimes incredulous that philosophy doesn't have one.

But we don't. Letter writers for T&P cases tend to note as much and then wave their hands at the relative status of various journals, possibly linking to a blog post or something.

I have recently thought that it would be good to have a general statement about this for administrative purposes, rather than putting something together on an ad hoc basis for every case. Below the fold is a draft of what a general statement might look like. It's an alpha version, and I am not confidently committed to either the contents or the form of presentation. So quoting it without that proviso is likely to earn my wrath. Talking about it as a possibly half-baked blog post is fine, though, and I'd appreciate feedback.

Cleaning Chekhov's Gun 
I wrote most of this years ago, and I stumbled across the file recently while working on something else. I'm sticking it here, like you do.

Struck by a semirealism 
In a number of recent articles, Bence Nanay has argued for singularist semirealism. It's an anti-realist view about natural kinds which holds that particular tokens of properties exist with various degrees of similarity and dissimilarity among them, but that there are not any natural property types. The view is similar to Anjan Chakravartty's semirealism, which holds similarly that the world consists of property instances more or less sociable with one another, and that the clusters of sociability which science picks out are not somehow special in nature.

Nanay writes:
Some pairs of property-tokens are closer together in the property-space; they resemble each other more than others. But property-types are our arbitrary ways of delineating regions of this property-space. The property-space does not have joints: it consists of lots of property-tokens, some close together, some further away from each other. (2013, p. 377)

His approach seems to be more deeply metaphysical than mine. Nanay is most centrally concerned with whether a natural kind is a thing in the world that exists. I am concerned centrally with the extent to which the world constrains scientific categorization. I am happy to say that categories which uniquely allow successful science would be natural kinds regardless of whether there is an entity in the deep ontology of the world which corresponds to that category. I am also willing to allow that kinds can be more or less natural, to the degree that the world condemns alternative taxonomies to failure.

Nevertheless, Nanay argues that singularist semirealism coheres with scientific practice. The reason is "that the two main tools of actual scientific practice, experimentation and measurement, are practices involving property-tokens and not property-types" (2011, p. 189; 2013, p. 383). This seems wrong to me for at least two reasons.

First: If a scientist were given a table of data which was just numbers or magnitudes, she'd have no use for it. Measurements necessarily have units. So measuring the masses and lengths of 10 samples necessarily requires measuring the masses as masses and the lengths as distances. Each singular property token must fit into a category scheme, and so measurement is impossible without kinds.

Second: The argument ignores the distinction between what Bogen and Woodward call data and phenomena. Singular measurements are data, which are always subject to error and variation. Although data play an important evidential role, scientists don't primarily care about data. They care about the phenomena which data instantiate. The phenomena are the curves or patterns which we think the data would trace out if it weren't for noise and error. When scientists repeat an experiment, they do not expect to produce precisely the same data as earlier experiments. Rather, they expect to get data which (once reduced by standard formal methods) will yield the same phenomena. So measurement and experiment are about general phenomena-types rather than singular data-tokens.

References

Bogen, James and James Woodward. 1988. "Saving the Phenomena." The Philosophical Review, 97(3): 303-352.

Chakravartty, Anjan. 2007. A Metaphysics for Scientific Realism. Cambridge University Press.

Nanay, Bence. 2011. "What If Reality Has No Architecture?" The Monist, 94(2): 181-197.

Nanay, Bence. 2013. "Singularist Semirealism." British Journal for the Philosophy of Science, 64: 371-394.

Types and tokens of blue 
Yesterday I learned about recent work by jazz combo Mostly Other People Do the Killing. Their album "Blue" is a note-for-note remake of "Kind of Blue". They transcribed all of the solos and performed them with meticulous care so as to produce a recorded album that replicates, as much as they could, the sound of the original.

The exercise has philosophical implications, and they know it. There are echoes of Pierre Menard's Quixote, which they foreground by using the Borges short story as their liner notes. Menard's goal, however, was not to copy but to put himself in a state of mind where he would write words that coincided with Cervantes' original. The parallel exercise would be if the band had tried to live their lives in a way which led them to improvise just the same notes which Miles Davis, Cannonball Adderley, John Coltrane, and the rest improvised back in the 1950s. That exercise would not have produced this album, because that exercise would not have led to something which sounds so precisely like "Kind of Blue".

So it's important that the band transcribed the solos, recorded tracks separately, and took such care to preserve information from the original performances. One natural reaction is that such slavish emulation isn't jazz. Moppa Elliott (bassist for the band) discusses this point in an interview about the project. He asks, "Is what we did even jazz? If it isn’t, what does that make it? If it’s not jazz, why not?"

I've now read a bunch of reviews of the album. Perhaps the best is Bruce Lindsay's deadpan paean. It's odd that nobody refers to "Blue" as a cover of "Kind of Blue". Part of this is because 'cover' is a category in rock music, not jazz. Rock and jazz have different versioning practices. But there's a familiar variety of cover where musicians attempt to play a song so that it sounds precisely like a canonical version of that song. In our terms, this is a mimic cover.

The similarity to a mimic cover makes it odd when Marc Myers in the Wall Street Journal review speculates that, "If 'Blue' is even moderately successful, jazz, rock and soul musicians may be motivated to clone other pivotal works like the Beatles' 'Rubber Soul,'..." Beatles covers and cover bands are already a thing.

In the paper where we introduce the phrase, Christy, Cristyn, and I argue that mimic covers are properly evaluated in terms of their fidelity to the original. I'm not sure whether that's the case with "Blue". Elliott suggests that the point is the opposite, to get people to listen to the original with an attentiveness to precisely those features which couldn't be or at least weren't faithfully reproduced in the cover.

However, because it is a transcription and performance by skilled musicians, "Blue" preserves information about the original (in a technical sense of 'information'). So one gets a kind of access to the original by listening to the new album. Imagine civilization collapses, all copies of the original Miles Davis album are lost, but somehow a copy of "Blue" survives. Certainly the jazz techno-priests in that dystopian future would listen to the album as a way to appreciate the way Davis and his band played, not the way Elliott and his band played. The performances, as repeatable interpretation types, are preserved in this meticulous homage.

In our less counter-factual dystopia, however, we have recordings of "Kind of Blue" to listen to alongside recordings of "Blue". The new album is like one half philosophical thought experiment, one half virtuosic accomplishment, and one half redundancy.

