6 Comments
John Warner

Having written a book explicitly advocating for "killing" the 5PE and another about how I think we must engage productively with generative AI in teaching writing, including why attempting to ban it is misguided, I agree with much here, but I also find a bit of excess enthusiasm for using LLM outputs to help students develop critical understanding of writing.

For sure, they can be used this way, but I don't see much evidence that doing so is necessarily superior to asking students to engage with actual human writing that has been created out of thought and intention, as opposed to patterns and probabilities.

I'm thinking about the implications of this passage: "Third, analyzing AI outputs cultivates unprecedented critical reading abilities. When students interrogate machine-generated text in a community, they naturally develop rhetorical and critical analysis skills that transfer to all reading contexts. They learn to identify patterns, question assumptions, detect biases, evaluate claims, and recognize tone—all while comparing the output against their intentions. This dialectical relationship between human prompt and machine response creates a uniquely powerful space for developing critical consciousness about language itself as a social semiotic system. Students begin noticing linguistic patterns they previously processed unconsciously, accelerating their development as sophisticated readers."

"Unprecedented critical reading abilities" is surely hyperbole, no? Everything you describe here can be and has been done with writing, and it can be done in ways that include noticing "linguistic patterns" while also requiring students to consider rhetorical intent. (I talk about an exercise I've used for many years to achieve this goal that utilizes a short passage from a David Foster Wallace essay on cruise ships.)

I think you're right that these tools can offer a dynamic lesson in "language," but I believe the more lasting and meaningful engagement for students, when it comes to building their writing practices, is not just with language but with ideas, both their own and others'. LLMs, as non-thinking, non-feeling, non-experiencing technology, have no real capacity for generating ideas, at least not ones that matter, given that they are literally disembodied.

I'm not saying these activities are worthless; I believe they're worthwhile. But I also think it's important to consider their limits, which are real and significant.

Terry Underwood

It's important to read what I said, John. I make no claim that AI will help students develop a "critical understanding of writing," as you say in your opening comment. I do explicitly claim "Unprecedented opportunities for critical reading," and I mean every word. They ARE unprecedented in a radical, not a hyperbolic, sense in terms of critical reading, interrogating a text, forensic reading. Humans have never faced such a reading task before. Truly unprecedented. You may think I'm splitting hairs when I insist on differentiating reading and writing in this instance, but it makes all the difference.

Also, I agree with you that we all need to prioritize, prize, privilege, and participate in the reading of literature, by which I mean human literature (I insist also that the bot cannot write literature and will never even simulate it). For my part, I am laser-focused on specifying precisely what those affordances and constraints are in terms of both ideological and autonomous literacy practices. There are too many generalities floating around. "Everything you describe here can and has been done with [human] writing," you say. Everything???? I think not. John, I humbly beseech you to look objectively past tradition and at reality. High school teachers don't have the luxury of treating the bot as some mystical demon or whatever. It is a computer, it functions in predictable and explainable ways following something like 20 well-understood algorithms (I'm not an expert in the computational end of it), and it is a serious mistake to conflate its output with either human writing/text or human reading/response--but the kids don't know that. So your comparison of "everything can be done with human writing" is nonsense. It can be done with human writing only if you ignore the bot and what the bot actually does, write it off, and argue "let's keep doing what we've been doing even though our students have this unprecedented--yes, unprecedented--simulated language machine that popped out of the oven."

Here's where I get hyperbolic: How can such brilliant people completely misunderstand this reality? I see it everywhere--the CCCC is crazy with it. There should be a co-operative effort to understand rather than build bunkers.

John Warner

There are a number of statements here that I think need significantly more support. Perhaps you've written in more detail on these views elsewhere, in which case I'd be happy to read those thoughts. But let me put some of this on the table for this discussion.

- "They ARE unprecedented in a radical, not a hyperbolic, sense in terms of critical reading, interrogating a text, forensic reading. Humans have never faced such a reading task before."

I don't know how this can be true in a world where linguistics has existed as a field for quite a long time. If you mean "forensic reading" as a new category of activity unique to LLM outputs, then what is the point of this activity? If it's to understand how LLM models work, I'd ask what role that has as an aspect of developing critical literacy that's meaningful to students.

- "There are too many generalities floating around. "Everything you describe here can and has been done with [human] writing," you say. Everything???? I think not. John, I humbly beseech you to look objectively past tradition and at reality. High school teachers don't have the luxury of treating the bot as some mystical demon or whatever"

This strikes me as a non sequitur, and I don't understand it. I've made no argument about tradition and no claim about how high school teachers should treat the technology. As it happens, I wrote a whole book about this and speak frequently on this topic to just these audiences, and I definitely don't recommend anyone treat it as "a mystical demon." You're yelling at a cloud here.

When I say everything can be done with human writing, I mean that the critical study of language does not require synthetic text; we can study the effects of language through human texts. What is special about the LLM output that allows us to do something new? What is this new thing that we could not do before and therefore must do?

- "It is a computer, it functions in predictable and explainable ways following something like 20 well-understood algorithms (I'm not an expert in the computational end of it), and it is a serious mistake to conflate its output with either human writing/text or human reading/response--but the kids don't know that."

This is not true. One of the most noted hallmarks of LLM output is that it is unpredictable. The existence of hallucinations demonstrates this, as does the fact that, when models change, previous processes that produced semi-predictable outputs no longer work. Yes, we have a strong sense of the algorithmic process of pattern matching that is the engine of this technology, and it is very much not like what humans do, but that fact by itself doesn't suggest we need to make special study of its outputs in regular school contexts, does it?

- "let's keep doing what we've been doing even though our students have this unprecedented--yes, unprecedented--simulated language machine that popped out of the oven. Here's where I get hyperbolic: How can such brilliant people completely misunderstand this reality? I see it everywhere--the CCCC is crazy with it. There should be a co-operative effort to understand rather than build bunkers."

Again, I don't know who this is directed at, but it shouldn't be me, because I don't hold any of these views. I've been arguing against doing what we've been doing since long before this technology arrived. The first thing I published about the technology argues that we should use it as a lens through which we consider the important work of reading and writing in order to preserve that which is worth doing. The subtitle of my book is "how to think about writing in the age of AI," which hardly seems like a bunker mentality.

I accept that you feel I've misunderstood your post which is entirely possible, but turning that misunderstanding into a rant that has little to do with what I've said or believe strikes me as a waste of everyone's time.

Terry Underwood

Apologies, John. I don’t mean to rant. I don’t like all the ranting, either, and am embarrassed to have become a ranter myself. It feels like a waste of time trying to defend my conclusion that we need to teach students how to read and write now that AI is in the world. I may be wrong, but that’s where I see you coming from: you think there is no need to deal with AI in literacy instruction, that we can accomplish all we need to do with human texts. I disagree. I just published a post I’ve been working on for a while that seeks to explain why.

Btw, I mentioned the Conference on College Composition and Communication because I read their position statement on AI a few days ago. It essentially says the same thing: the onus is on those who conclude that AI should be invited into the classroom (vs. prohibited) to prove their case. In my mind, AI is already in the rhetoric classroom. We need to get on the ball to understand how it is impacting students and what we can do to support them during this transition.

John Warner

"You think there is no need to deal with AI in literacy instruction."

I don't know how to say this more clearly, but I think the opposite of this and wrote an entire book explaining why, and how I believe we should be thinking about this challenge so as to retain and even enhance our human capacities. I'd be happy to send you a copy of the book if you DM me an address. I have lots of spare copies.

I've not seen the CCCC statement, but to the extent it reflects your mischaracterization of my own stance, I would say I disagree with them. I discuss in my book and elsewhere (using Marc Watkins' framing) how AI is "unavoidable, but not inevitable": we must be aware of and mindful about the technology, but that doesn't mean we have to leap into its arms as the future. (Which I think you'd agree with.) The CCCC statement I am familiar with, from their joint task force, does not resemble what you're describing, so I'm assuming it's something new. https://hcommons.org/app/uploads/sites/1003160/2023/07/MLA-CCCC-Joint-Task-Force-on-Writing-and-AI-Working-Paper-1.pdf

I do think that individual instructors should be empowered to reject the use of the technology in their courses because I think individuals deserve that freedom and because there are many valid ethical and moral positions that may lead someone to not want to use the technology.

Terry Underwood

Thanks again, John. I’d love to have a copy of your book. I’ve seen it referenced in a lot of Substack posts. Re the CCCC statement—I think I was looking at an earlier draft. The keynote at the April 9 CCCC convention was a bit concerning to me because it was a strong call to rein in the bot, even though the conference offered sessions on instructional applications. The keynote speaker, the President, is clearly antagonistic toward any use of bots in English classes.

My interest in this issue derives from my work in the field of Language and Literacy. My doctorate focused on both composition theory and comprehension theory, and my dissertation focused on portfolio assessment. I was awarded a Promising Researcher award from NCTE in 1996. My undergraduate and graduate (M.A.) work was in English with a heavy emphasis on linguistics. I think my motive has less to do with concerns for teachers than for students in this case, which is typical for me. During my ten years teaching in the K12 system, I found myself dismayed at the weight of tradition even in the face of compelling evidence to change. Teachers with tenure do whatever they please, including using photocopied worksheets day in and day out.

Probably the biggest issue where we differ is on whether teachers should have the authority to opt out of teaching about AI. I do understand the sticky moral dilemmas that arise when using AI, but the larger ethical principle of serving children at the point of need overrides them. Students need to learn under a wise mentor how to resolve these moral issues and make the best choices. I have the same problem with phonics in first grade. I don’t think teachers should have the authority to opt out, though I do think they should have the professional prerogative to teach it according to their experience and professional values. In a private school, I would have no comment, though I still think it’s unethical not to teach students about an innovation that is turning upside down every discipline and career they will face.

The benefits of AI are many as well, but they are much more difficult to teach than many writing instructors are ready for. I’ve been field testing Mentor Prompts in a senior high English class, having phone interviews with students, and their pure relief at having someone to talk to about AI is concerning. Their teachers don’t want anything to do with it—for moral reasons. Regardless of what I or you think about it, the students are getting scarred from strict dressings-down about cheating and plagiarism, and they want their teachers to teach them, not preach at them.

My friends tell me consistently, “You are always teaching, Underwood”—from my first teaching job in the community college to my stint in fourth grade, then 7th grade, then in a Curriculum Office as a traveling demo teacher, then my work at UC Davis, then 17 years in a teacher preparation program at CSU Sacramento. I have two books myself, both on portfolio assessment in the English classroom. I’ve always been committed to teaching the writer, not the writing; teaching the reader, not the reading. If I understand you right, you believe AI ought to be taught in schools, but you believe teachers should have the authority to opt in or out. Is that accurate? That may be the only difference in our positions.

Thanks again for offering to send me your book. I’ll DM you with my address. I hope I haven’t put you off from reading my stuff here. I do have strong opinions—I was a child of rural poverty, one of thirteen children, and that background has made me hyper child-centered, primarily because I grew up in a world that was not.
