11 Comments
Gwen Pauloski

Yes, yes, and yes. My dissertation research was on the subject of helping adolescents make deep sense of info-texts, and I just submitted my manuscript to Solution Tree on the subject. I have wrestled with the same leaps in logic that you are pointing out, and appreciate the distinctions you draw. I hope you will find my work useful.

Terry Underwood, PhD

Thanks, Gwen. I’ll check it out. Glad this resonates for you.

Terry Underwood, PhD

Lots of factors interact to determine whether a text is “readable” by particular categories of people. Some are subjective and hard to predict: prior knowledge of the topic, interest, language, and genre experience, for example. Some are situational: purpose, motivation, resources available, immediate context. In the literature these are researched as reader variables. Formulas simplify or ignore reader variables because what is available at the time of measurement is a text. A proxy for “reader” is “grade level,” which is handy because in the U.S. we assemble readers into actual grades and assume that those readers have enough in common to benefit from a certain level of instruction. Linguistically, word and sentence length are handy measurements as well, and they do distinguish texts with more complex syntax and vocabulary from those with simpler words and lighter syntactic demands.

At their core, readability formulas are heuristics, built on many assumptions, which yield rough approximations that seem to have functioned well enough for the factory model of schooling to place bulk orders for schoolbooks. Over the years I’ve seen efforts to incorporate formulas into writing instruction as a way to remodel difficult language for less sophisticated audiences, and perhaps people still use them that way, but my experience tells me that such use often produces unsatisfactory results.
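For readers curious how such formulas reduce a text to counts, here is a minimal sketch of one widely used formula, the Flesch-Kincaid grade level. The syllable counter below is a crude vowel-group heuristic, an assumption of this sketch rather than the dictionary-based counting that production tools use:

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: each run of consecutive vowels counts as one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Note what the formula never sees: the reader, the topic, the purpose, the context. Two counts stand in for all of it, which is why short, simple sentences can score below grade 0 while one dense sentence of polysyllabic words can score far above grade 12.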

In my opinion, these formulas are interesting to study, perhaps useful in some circumstances, but poor substitutes for human judgment. AI works within the same parameters: some standard scale of difficulty (grade levels 1-12, or an arbitrary range such as 1-50) chosen to fit measurement needs and purposes, language features like vocabulary and syntax, and the past history of a text if available. I like what ACT has been doing with subjective rating systems that use expert teacher judgment to match texts with difficulty levels.

Hope this is useful!

Karen Smiley

These insights on reading comprehension are so interesting - thank you for sharing, Terry!

I'm just coming up from a deep dive on readability metrics (post coming soon). I've been wondering about the "grade levels" these tools assign, and whether they're still as relevant today as when the metrics were developed and validated.

The standard readability metrics are also primarily focused on words - pure texts - and don't really consider tables, images, structure, or other elements that can help comprehension of content. I'm pondering that too. It seems like a gap.

What do you think?

Terry Underwood, PhD

Grade level is a poor anchor for readability judgment. Prior knowledge (PK) is empirically defensible as a dependent variable because it takes into account reader variables like motivation, interest, and vocabulary—all known factors that affect the degree of readability or accessibility a reader has with a particular text. So grade level is a convenient metric because it is the organizing principle of US schools, but it works simply as a heuristic, a tactic for ordering large quantities of texts. PK levels 1-12 would likely produce very different distributions of difficulty.

Word and sentence length are problematic as well because they discount very important textual features like inferential load, text coherence, perspective or point of view, etc. AI duplicates all of the problems of the formulas.

I taught my teacher candidates that it’s important to judge text difficulty and challenges for a particular set of readers in a given classroom under local conditions in order to design instruction that accommodates students. A difficult text can have short words and sentences while an easy text can have longer words and longer sentences.

I have some posts on readability if you’d like to see. Let me know and I’ll send you a link.

Nick Potkalitsky

This is amazingly helpful, Terry. This article is cooking. It very much rings true to my experience as a high school teacher. The teacher as epistemological authority drives textual interactions, closing down fields of inquiry and engagement left and right. No doubt. Students’ lack of engagement leads to more invasive assessment strategies, closing down the process even more. It is almost as if, when you really value a textual experience, it needs to be an in-class participatory event.

Terry Underwood, PhD

Buddy reading at a minimum. Reading to learn, not to earn credit, not to write a report or essay. Reading in a community where people have lots of avenues to engage a topic, problem, or phenomenon. Community implies collaborative epistemology.

Comment deleted
Jun 7, 2024
Terry Underwood, PhD

I am so happy to get this feedback, Rania. I’d love for you to comment further on the dynamics of your struggle. I’m sure you aren’t alone, and your thoughts would help me clarify the total cost of intellectually coercive instruction.