The integration of AI writing tools raises thorny questions about writing process and assessment. Many writing teachers, including me, instinctively advocate for students to complete full drafts before consulting AI. This position deserves careful examination. The underlying assumption—that unmediated drafting is necessary for developing ideational fluency and critical thinking—may not hold true for all writers or all writing tasks.
Consider the counterargument: Writers with clear semantic vision, whether for an analytical argument, a vivid description, or a compelling narrative, may benefit from articulating that vision in a carefully engineered prompt, a map of their perceived semantic text-field. Writing this prompt effectively is hardly a shortcut around thinking.
I think of it as crystallizing my intention, an intense form of rhetorical planning that demands clarity about my writing goals, my audience's needs, and my perspective on the textual landscape available to me, or the landscape needed to contain my semantic vision. In this view, early AI consultation could enhance rather than diminish the writing process. In fact, I see evidence that the efficiency I have gained by transforming a semantic vision into a post through crystallization has contributed to improvements in my effectiveness.
The optimal timing of drafting and first AI contact likely varies with multiple factors: the specific writing task, the writer's developmental stage and history, the task's demands on knowledge or experience, and the writer's metacognitive awareness.
This complexity suggests the need for careful assessment to guide differentiated instruction. Classroom-based assessments like think-aloud protocols, comparative writing tasks with and without AI, writing process interviews, and self-reflexive analysis of previous writing could help teachers understand which students might benefit from which approaches to AI integration.
Yet this timing question often masks a deeper anxiety about AI detection and surveillance. The focus on when students should use AI sometimes stems from concerns about maintaining academic integrity, leading to an unhealthy quest to unmask AI-generated text. This surveillance mindset damages the trust essential to writing instruction and distracts from our core mission: helping writers communicate effectively with readers.
A reader response approach offers a promising alternative that addresses both the timing and surveillance challenges. Instead of prescribing universal rules about AI interactions or scrutinizing texts for AI markers, we could develop protocols for learning that focus on how writing affects actual readers: our writing community peers, our families and friends, our trusted readers.
These protocols would invite readers to share their honest engagement with texts, nothing fancy: noting moments of connection, confusion, or insight; describing emotional responses; articulating what lingers after reading. Peer feedback typically references a rubric. For this purpose, we need readers as readers, not readers as critics. Real reader feedback provides writers with valuable information about how their writing lands with readers, regardless of when (or whether) they consulted AI.
This shift requires targeted classroom-based research to understand how different writers benefit from different approaches to AI integration and how reader response protocols can support their development. We need qualitative data about how writers with varying levels of experience and metacognitive awareness navigate AI tools and how different types of reader and AI feedback inform their process decisions.
The research should examine not just final products but the complete ecology of writing: students' planning and crafting processes, their prompt engineering choices, their revision decisions, and their reflections on what works and why.
The goal isn't to find universal rules about AI integration but to help each student develop the metacognitive skills to make informed choices about when and how to use these tools in their writing process. By focusing on reader response rather than AI detection or rubric-regulated critique, we can create writing environments where decisions about AI integration are guided by rhetorical purpose and audience impact rather than by surveillance fears.
This reader-centered approach could transform how we teach writing with AI, replacing suspicion with genuine inquiry into what makes writing effective for its intended audience. Through careful classroom research, we can develop evidence-based practices that support writers in making thoughtful choices about AI integration while maintaining focus on what matters most: writers making texts that express their semantic vision and connect meaningfully with readers.
Terry, I think your careful discussion here is spot on - AI guidelines in classrooms, especially at the college and adult student level, need to match the kind of writing involved. As you note, genAI could be a good tool for critical thinking in expository, subject-based work. I also agree that the current emphasis on surveillance does no one any favors.
My problem with AI has to do with the way it flattens individual voice and self-expression. It’s where I find myself most at odds with the bloviating of tech executives like Sam Altman, because writing excellent personal nonfiction is about feeling, not just thinking. That kind of writing can very much be influenced by anchoring bias - the first thing you turn up sets the direction you go - but in my experience as a writer and writing instructor, the first thing is often not what you really want to say. I call it “throat-clearing,” which ChatGPT and other bots tend to do very well.
Yes, students could be trained with good AI mentors to focus on what they really want to say, with attention paid to anchoring. But that’s not currently the approach or focus of instructional designers at my institution or in the public debate. That’s what worries me.
Yeah, I hear you about possibilities for formative assessment - they could make a real difference, but a whole lot of attitudes need to change to allow me to do it. So maybe that’s why I’ll develop such a GPT agent. We’ll see. I do agree that the tech bros suck up way too much media air, as does Trump. I’m not so much interested in lambasting them as I am in taking down their amoral “solutions” for education, writing, truth, and the American Tech Way. That’s what powers the Khanmigo bait you rightly point to as a waste of time. I’d also argue that it strategically avoids discussion of the labor and thought required to create an effective curriculum for students, whether or not AI is used. This is what I’ll likely talk about if I write about assessment - as it stands, that work amounts to invisible labor in the eyes of administrators, politicians, and business executives.