Reading the Meditations again, I began to wonder how different it would be if it had been written as a dialogue instead. The place of the reader would then sometimes be given in the second person:
Surely [says Socrates, Philonous, or whoever is the dialogue's voice of wisdom] you must admit that there is no sign by which you can tell you are not dreaming this whole conversation.
Struth! [replies the dialogue's rube] Indeed it is so. The fact that I am speaking as if in a stage play suggests that it may truly be a dream.
I am not sure if any philosophical substance would be lost in this alternate-universe version of the Meditations. The text would be somewhat longer, however, because the dialogue would include pleasantries among its characters at the start of each Meditation:
I have done as you suggested and imagined a demon hell-bent on deceiving me. Quite frankly, it made for rather a grim breakfast. Perhaps today you can convince me of the existence of bacon.
You are too hasty. The probable existence of bacon is still days away.
The NY Times has a story about Caveon, a firm that uses forensic methods to identify students who are cheating on standardized tests. There are reasons to be dubious both about standardized testing and about automated cheater detection. Cases can be made for both, but the deep problem is that they inevitably have an aura of faux precision. Two examples:
1. Cheating is on the rise all over the place, the story says. According to a state functionary, since Caveon "began working for Mississippi in 2006, cheating has declined about 70 percent." The problem, of course, is that there is no independent measure for how much cheating there is. If there were, then that independent measure could be used to identify cheaters without paying Caveon.
2. The criteria used to identify cheaters all seem sensible enough: too many correlated errors, too much variance in performance from section to section, lots of erasing (which allegedly is a sign that someone later cleaned up the answers), doing well on hard questions but poorly on easy ones (although this might occur because the student is overconfident or bored with easy questions), and so on. Even though some of them are only suggestive, such factors can combine to make a convincing case for cheating.
Caveon's actual algorithm is proprietary, but the article says that it calculates the probability that the particular array of factors might occur by chance. We are told, "When the anomalies are highly unlikely - their random occurrence, for example, is greater than one in one million - Caveon flags the tests for further investigation by school administrators." The deep problem here is that there is no natural probability model for the non-cheating test taker, but the precision of 1-in-a-million only makes sense given some defined probability model. For example, students of some backgrounds might have a hard time with some so-called 'easy' questions or an easy time with some 'hard' ones. The probability that they would do something that looks like an anomaly would be pretty high; higher, anyway, than the probability that would result from rolling dice to fill in bubble sheets.
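To see why the choice of model matters, here is a toy calculation in Python. The numbers are made up purely for illustration, and this is certainly not Caveon's proprietary algorithm - just a sketch of how the same 'anomalous' answer pattern gets wildly different probabilities under different assumptions about the honest test taker.

# A toy calculation, not Caveon's method: the 'anomaly' here is acing 10
# questions rated hard while missing 10 rated easy. Its probability depends
# entirely on the null model assumed for an honest student.
HARD, EASY = 10, 10  # number of hard and easy items making up the anomaly

def p_anomaly(p_hard_correct, p_easy_correct):
    """Chance of getting every hard item right and every easy item wrong,
    treating items as independent under the assumed model."""
    return (p_hard_correct ** HARD) * ((1 - p_easy_correct) ** EASY)

# Null model 1: rolling dice to fill in the bubbles (five choices, so p = 0.2).
dice = p_anomaly(0.2, 0.2)

# Null model 2 (hypothetical numbers): a student for whom the difficulty labels
# are backwards - the 'easy' items assume background they lack, while the
# 'hard' ones happen to play to their strengths.
mismatched = p_anomaly(0.7, 0.4)

print(f"dice-rolling model:     {dice:.1e}")        # about 1 in 90 million
print(f"mismatched-label model: {mismatched:.1e}")  # thousands of times larger

Under the dice-rolling model the pattern looks like a one-in-many-millions fluke; under the other model it is the sort of thing that would turn up routinely in a large testing population. A 1-in-a-million headline figure is only as good as the model behind it.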
I sympathize with Walter M. Haney, who is quoted in the article complaining that Caveon's methods haven't been published and so aren't open to scrutiny. As he says, "You just don't know the accuracy of the methods and the extent they may yield false positives or false negatives." The CEO of Caveon replies that "the company had not published its methods because it was too busy serving clients. But the company's chief statistician is available to explain Caveon's algorithms to any client who is curious." This doesn't seem like enough. The people who are qualified to evaluate the reliability of Caveon's algorithms are experts in statistics and educational testing, not the clients of Caveon who are administering tests.
Of course, people generally tend to be dazzled by mock precision. So part of the blame might go to the Times' reporting rather than to Caveon. Yet Caveon profits from this general glow that surrounds quantitative measures, and the persuasive power of secret algorithms makes Haney's criticism all the stronger.
I was gifted a copy of Logicomix: An Epic Search for Truth, started reading it last night, and - after waking up in the middle of the night - finished it. It is engaging, and I enjoyed it.
Perhaps, as someone who teaches logic, I should have something to say about the book as an exploration of the limits of logic. I don't. I'll just make a comment about rhetoric.
The book begins with the authors meeting, wandering around Athens, and talking about the story that forms the core of the book. Within that, there is a narrative about Bertrand Russell giving a speech at the onset of World War II. During his speech, Russell talks about his own life. These flashbacks about Russell's life and about developments in mathematical logic are the actual core narrative. The authors touch up history a bit, having Russell actually meet all of the major logicians.
At various times during the book, the authors break out of the nested narrative about Russell and return to themselves in Athens. Their wrestling with what the story means is a framing device, and after the Russell narrative ends, the whole creative team attends the dress rehearsal for a play.
Several reviews call this "clever framing", and the creators come across as charming. One of them has a dog who is taken for walks at various times and who provides visual interest in the background of other scenes.
This kind of self-referential inclusion of the artists has become a standard thing for non-fiction comics. The canonical case, I guess, is Art Spiegelman's Maus. There, it is indispensable to the story. The historical part is about Spiegelman's own father, and the parts about the author are about his struggle to come to grips with his father's story. His inclusion in the narrative is not just a device, but an important aspect of the story.
A more recent example is Bryan Talbot's entertaining Alice in Sunderland. This book lacks a central narrative. Instead, it follows Talbot's ruminations about the English city of Sunderland, the history of England, and Lewis Carroll. Various vignettes are presented in different artistic styles, and in some ways the book becomes about graphic style; Talbot as illustrator is an aspect of that part of the story. On a less abstract level, it also discusses how he came to Sunderland and came to be writing the book.
Further examples are provided by the various -ing Comics books by Scott McCloud: Understanding Comics, Reinventing Comics, and Making Comics. In them, McCloud himself appears dressed in a Zot t-shirt. Although some of what McCloud says is first-person reflection, mostly he is talking about the medium of comics. The McCloud avatar on the page provides visual interest. We don't have to watch him walk the dog or engage in activities that reach beyond the central discussion and into daily life. (Scott McCloud wrote documentation in the same style for the release of Google Chrome. I found the McCloud avatar a distraction there. It doesn't fit for him to narrate the introduction of a new web browser in the way it makes sense for him to narrate about comics.)
In all of these examples, the authors are actually part of the story; they play a role in it, and so it makes sense for them to appear. And their appearance is largely limited to that role.
The authors' intrusions into Logicomix don't seem as well motivated. At one point, one of the contributors has his cell phone stolen while he is wandering around the neighborhood where he grew up. He later sends an e-mail to the author in which he suggests that logicians were like mapmakers. They went too far when they confused the map (formal logic) with the world (reality). This analogy, or something like it, recurs in the core narrative when Wittgenstein writes the Tractatus. Nonetheless, I don't think the episode of the stolen cell phone really adds anything.
Throughout, that contributor is pretty passionate about how the story ought to be told - but we never learn why. He is a computer science professor, the others are artists, and perhaps that is supposed to be enough. They kvetch about how the story should be told or what it means, but we never learn why any of them cares about telling it. In Logicomix, the compsci prof is brought in as a consultant when part of the story has already been written. The authors are just stipulated to be the people telling this story. They have no connection to it. The framing narrative, although visually interesting and pleasant enough, doesn't really add to the story.
Note that I'm not asking for much. In Alice in Sunderland, Talbot is simply enthusiastic about his adopted home town. We don't even get that much in Logicomix.
So I don't think that the self-referential device is "clever framing". A self-referential framing device is now simply a standard thing for non-fiction comics, like epistolary structure was for 19th-century novels. The epistolary outer wrapper of Frankenstein really doesn't make it a better novel - there's a reason that retellings of the Frankenstein story drop it - but it doesn't make it an appreciably worse novel, either. I feel the same way about the self-referential framing of Logicomix.
The blog software I use is a small open source project. It was abandoned and rudderless for a while, but a new programmer has now taken the helm. The result is the first update in quite some time. It necessitated a change in the appearance of FoE, but otherwise the update installed smoothly.
If anything has broken, please let me know.
Brian Leiter mentions the fun to be had with Google's Ngram Viewer, a webpage that graphs the frequency of words or phrases in books over time. Two interesting comparisons:
"Immanuel Kant" versus "Thomas Reid" in English from 1800 to the present. As one might expect, discussion of Kant increases over time. Perhaps surprisingly, discussion of Reid continues at more or less the same level over the whole period.
"Pragmatism" versus "utilitarianism" from 1900 to the present. Pragmatism gets an initial bump when coined but falls off in the second decade of the century. After about 1920, the two are in lockstep with utilitarianism shadowing pragmatism.
Some niggling: There may be some sample selection bias, because it only counts books that Google has scanned. Also, it is only matching whole phrases; it won't aggregate different forms, such as "C.S. Peirce" and "Charles Sanders Peirce".
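For what it's worth, the aggregation problem has an easy manual workaround: pull the per-year frequencies for each form of the name and add them yourself. The figures below are invented, just to show the shape of the fix; real values would have to come from the Ngram data for each phrase.

# Invented per-year frequencies for two forms of the same name; summing the
# series year by year gives the aggregate the Ngram Viewer won't compute.
freq_abbrev = {1900: 1.2e-7, 1950: 2.0e-7, 2000: 3.1e-7}  # "C.S. Peirce"
freq_full   = {1900: 0.4e-7, 1950: 0.9e-7, 2000: 1.5e-7}  # "Charles Sanders Peirce"

combined = {year: freq_abbrev.get(year, 0.0) + freq_full.get(year, 0.0)
            for year in sorted(set(freq_abbrev) | set(freq_full))}

for year, freq in combined.items():
    print(year, f"{freq:.1e}")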