Month: March 2013

Geneticists Date Homer

In “Linguistic Evidence Supports Date for Homeric Epics” Eric Altschuler and colleagues bring “formal statistical modeling of languages” to bear on the question of when the Iliad “was produced.”1 Such an approach might prompt some interesting discussion, but only when informed by relevant expertise. Regrettably, they willfully disregard other forms of expertise:2

Our analysis is not informed or constrained in any way by historical, cultural or archeological information about Homer or his works…

It is disconcerting to find scholars so dismissive of pertinent information. Had they consulted with historians, classicists, or archeologists, they might have avoided some of the questionable assumptions that undergird this essay.

The authors assume that the text we have today reflects a particular, definite, discoverable moment in the past. But what moment, if any, does it reflect? The Iliad was composed and transmitted orally long before it was first written down. We have little to no evidence for how the written text differs from the oral tradition that preceded it. The oldest surviving written fragments of the Iliad date from centuries after we think the poem was first committed to writing, and the oldest surviving complete copy, the fabulous Venetus A, dates from nearly 1,800 years later. On the basis of the written text alone, then, it is difficult to know what moment, if any, it reflects.

The first folio of the Venetus A, the earliest complete copy of Homer’s Iliad (from the Center for Hellenic Studies).

The authors also assume that the text and modern Greek have varied independently as a function of time and only time. They exclude all complicating factors that might affect both the text itself and the development of modern Greek. That assumption might work for culturally neutral texts, but it is problematic for culturally central ones. The Iliad was a culturally dominant text that served as the basis for education for generations. Later authors consciously imitated Homeric expressions precisely because they were thought to have been used by Homer.
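To make concrete how much work that “time and only time” assumption does, here is a minimal sketch of a constant-rate lexical replacement model, the general family to which such dating methods belong (the single rate λ is an illustrative simplification of whatever rate structure the authors actually use). If each basic-vocabulary item is replaced independently at a constant rate λ per year, then the probability that an item survives unchanged for t years, and the date estimate implied by finding k of n items shared between two stages of the language, are

\[ P(\text{survives } t \text{ years}) = e^{-\lambda t}, \qquad \hat{t} = -\frac{1}{\lambda}\,\ln\frac{k}{n}. \]

Every date a model of this kind returns hinges on λ holding constant across the centuries; a text that later generations deliberately imitated and archaized, as they did the Iliad, violates that premise from the start.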

Altschuler and colleagues are following in the footsteps of earlier researchers who have oversimplified a problem so that they can apply models from the sciences to questions of cultural production and transmission. Those simplifications weaken the argument on which their conclusions rest.3

In the end they offer two conclusions:

our analysis of common vocabulary items in the Iliad increases our confidence in its age and shows how even fictional texts can preserve traces of history.

Neither conclusion is particularly revelatory.

Their analysis suggests as most likely a date in the early- to mid-eighth century BCE, depending on how they tweak their model by including or excluding information. Their model does not, however, rule out dates that seem unreasonable: it allows dates as early as 1351 BCE and as late as 61 BCE (when they chose to include obviously relevant historical evidence, the range narrows to 1157–376 BCE). Archeological and textual evidence shows that such dates are impossible: archeological evidence suggests that Troy was destroyed in the early twelfth century BCE (see here for more on Troy), and Plato was already quoting Homer in the fourth century BCE.

Their other contribution is equally banal. Historians and textual scholars have known for centuries that “even fictional texts can preserve traces of history.” That fact is one of the cornerstones of textual criticism. Working from the assumption that all texts preserve traces of their history, Lorenzo Valla demonstrated nearly six centuries ago that the Donation of Constantine was a late forgery. Historical criticism still accepts it as a truism that all texts preserve history.

Unfortunately, the authors seem to have contempt for the other scholarly, non-science disciplines that could have contributed to and improved this project.4 They denigrate the conclusions derived from historical and archeological sources as “historians’ and classicists’ beliefs.” Labeling something a belief characterizes it as opinion, as view, as conviction, as something neither rational nor grounded in evidence. Likewise, for historians and classicists there is a “preferred date for Homer.” Like belief, prefer reduces the claim to opinion. In contrast, and despite the absurdity of some of the results offered by their model, they give a “formal quantitative estimate,” offer a “prediction,” and estimate “with 95% confidence intervals.” Their model “returns a date for Homer.” Unlike the amateurish methods and conclusions of historians and classicists, Altschuler and colleagues offer results that overcome the stain of opinion: scientific conclusions grounded in a “Bayesian approach” and backed up by 95% confidence intervals.

It is too bad they didn’t consult with classicists, historians, and perhaps archeologists. If refined and developed in consultation with relevant experts, their model might be able to offer interesting insight into historical questions. As it stands, however, they seem to have squandered considerable money on something that doesn’t contribute anything of value.

Daniel Mendelsohn nails it.

Thanks to Brett Mulligan for offering his expertise and keeping me from making egregious errors.

1 By “produced” it seems they mean something like first written down, though they slide unhelpfully between various ambiguous expressions such as “the age of the Homeric epics,” “mean estimate for the date of Homer’s works,” or simply “a date for Homer.”

2 The authors are not alone in denying relevant expertise. In November Gerald Crabtree, a geneticist at Stanford, admitted his lack of expertise in a field and then went on to speculate about “one of the most important questions” in that field.

3 See John L. Cisne’s “How Science Survived: Medieval Manuscripts’ ‘Demography’ and Classic Texts’ Extinction,” Science 307 (2005): 1305–1307, and the various responses to it. (Unfortunately, all are behind Science’s paywall.)

4 Does this make them experts or “so-called experts”? Only Brian Ince knows.

Critical Thinking in Classrooms and Museums

“Critical Thinking is Best Taught Outside the Classroom” claims that museums, TV shows, and hands-on fairs like the Maker Faire are better at teaching critical thinking skills than the standard classroom setting. Critical thinking here is marked by asking questions like “What if …?” and “How can …?”, followed by questions about cause and effect.

While museums can play an important role in helping children develop critical thinking skills, it isn’t clear that museums or other “institutions of informal learning” are better suited to teach those skills. Critical thinking (asking good questions, as the article would have it) requires more than an unstructured encounter with some exhibit or device or situation. And learning how to ask a good question requires more than being taught to ask “What if …?” and “How can …?” Good questions require relevant background knowledge. Children, adolescents, and even adults often lack the knowledge needed to formulate a good question about an unfamiliar object or artifact. That is why museums spend so much time, energy, and money designing exhibitions, selecting objects, arranging displays, crafting labels, and training docents and guides. Nor should we be surprised that college students were found to ask better questions than fifth graders: college students simply knew more.

The failure of the U.S. school system to teach students to think critically, or even to give them the opportunity to develop the habit of thinking critically, is not news to anybody.1 A post over at College Misery summarizes a common lament shared by college and university professors across the country. In “Extra Class” Hiram suggests that students have never been asked to think:

They come from the worksheet generation. Literally, most of what they’ve done in school is take standardized tests and fill in worksheets. They haven’t been asked to think, or to try to think, or to imagine that thinking means anything. They just have done things, filled in things.

Teaching any students, but especially college students, to start thinking is laborious. It requires not only a conscious decision on the part of the professor, but also a commitment by the professor to weather a period of student resentment and anger. Students get into college because they have done well in school, or at least because they have learned how the system works. When given discrete tasks with well-defined criteria for success (a fixed set of questions, each with one right answer and several wrong ones, say, or a 500-word essay on a particular historical event), they perform admirably. After nearly a decade of acquiring those skills and coming to believe both that they constitute thinking and that school is the application of those skills, students are understandably uncomfortable when professors inform them that neither is true. Student morale continues its downward spiral when professors then demand something new and different. Consequently, teaching students how to think also requires buy-in from the students themselves. Professors have to convince them to give up their comfortable system of right and wrong, along with the familiar forms of evaluation that mark success, in exchange for an approach that has neither tidy answers nor simple assessment criteria.

In my history of science courses I try to address these challenges by teaching and modeling curiosity, showing students how to ask questions, and explaining what makes a good question. I begin by explaining clearly why all this matters: why we are going to make the effort and why they, the students, will benefit from learning a new set of skills. Approached this way, the formal nature of the classroom setting facilitates rather than inhibits teaching students how to think critically.

1 Lambasting the school system for its failures has become a national pastime. Informal learning environments, with their higher tolerance for failure, are not panaceas, however. “Informal learning environments tolerate failure better than schools” largely because such environments lack systems of evaluation; success and failure have no real meaning in them. Progressing to the next exhibit in a museum does not depend on having succeeded at the previous one in the way course prerequisites work. If your gadget at the Maker Faire falls apart or fails to work, you aren’t held back until you demonstrate proficiency, as you might be held back a grade or not allowed to graduate. It would be nice to teach students to start thinking at a relatively early age, but that would require changing the incentives and rewards that so many students, parents, teachers, school districts, admissions committees, employers, and the government have come to expect and depend on.