Discussion about this post

Gwen Pauloski

Yes, yes, and yes. My dissertation research was on the subject of helping adolescents make deep sense of info-texts, and I just submitted my manuscript to Solution Tree on the subject. I have wrestled with the same leaps in logic that you are pointing out, and appreciate the distinctions you draw. I hope you will find my work useful.

Terry Underwood, PhD

Lots of factors interact to determine whether a text is “readable” by particular categories of people. Some are subjective factors hard to predict—prior knowledge of the topic, interest, language, genre experience, for example. Some are situational—purpose, motivation, resources available, immediate context. In the literature these are researched as reader variables. Formulas simplify or ignore reader variables because what is available at the time of measurement is a text. A proxy for “reader” is “grade level,” which is handy because in the U.S. we assemble readers into actual grades and assume that readers have enough in common to benefit from a certain level of instruction. Linguistically, word and sentence length are handy measurements as well, and they do distinguish texts with more complex syntax and vocabulary from those with less demanding words and structures.

At their core, readability formulas are heuristics, built on many assumptions, which yield rough approximations that seem to have functioned well enough for the factory model of schooling to place bulk orders for school books. Over the years I’ve seen efforts to incorporate formulas into writing instruction as a way to simplify difficult language for less sophisticated audiences, and perhaps people still use them, but my experience tells me that such use often produces unsatisfactory results.
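To make concrete how thin these measurements are, here is a minimal sketch of one classic formula, the Flesch-Kincaid grade level, which uses exactly the two features mentioned above: average sentence length and average word length (in syllables). The syllable counter below is a crude vowel-group heuristic of my own, not a standard library; real tools use similar approximations.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: runs of vowels, minus a common silent 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    count = len(groups)
    if word.lower().endswith("e") and count > 1:
        count -= 1  # treat trailing 'e' as silent ("make" -> 1 syllable)
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Estimate a U.S. grade level with the Flesch-Kincaid formula:

    grade = 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Note what the function never sees: the reader's prior knowledge, purpose, or motivation. Two passages on wildly different topics score identically if their sentence and word lengths match, which is precisely the limitation described above.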

In my opinion, these formulas are interesting to study, perhaps useful in some circumstances, but poor substitutes for human judgment. AI tools work within the same parameters: some standard scale of difficulty (grade levels 1-12, or a numeric range such as 1-50) chosen to fit measurement needs and purposes, language features like vocabulary and syntax, and the past history of a text if available. I like what ACT has been doing with subjective rating systems that use expert teacher judgment to match texts with difficulty levels.

Hope this is useful!

