As artificial intelligence systems become more sophisticated, a disturbing reality looms on the educational horizon, one where both teachers and students become intermediaries in a self-perpetuating cycle of AI-driven instruction. In this analysis, I'll provide real-world examples as evidence that the infrastructure for this impending nightmare is under construction right now.
Prompts to Prime the Pump
The nightmare begins with the automation of assignment creation. AI systems, programmed with curriculum standards and educational objectives, can generate writing prompts and rubrics optimized for learning and for fair, reliable evaluation. A recent scientific study demonstrated, incontrovertibly according to its authors, the superiority of AI-generated prompts over human-written ones ("old-fashioned prompts," in the researchers' language). Students have different "tastes" in prompts, and evidently AI-generated prompts can reach those taste buds and inspire more creativity and imagination.
I couldn't find any meaningful explanation of the crucial differences between an AI prompt and an old-fashioned one, a fatal flaw in a study positing a difference between treatment and control groups. More fundamentally, the study lacks a theoretical framework to explain why these differences would matter for student learning and development. Without such a framework, the research can't answer the crucial "So what?" question: how do different types of prompts affect students' growth as writers and thinkers?
The study promised to deliver mixed-methods insights but left out the mix. It's pretty much all numbers. Consequently, I have little hope that such a study would be published in the Harvard Educational Review or Research in the Teaching of English; nonetheless, masquerading as science, it shows up in an online search. I'm assuming that the embedded parts of ordinary human delivery of a prompt in class (the part where the teacher leads a discussion of the prompt assignment and helps students shape their responses to it) aren't viewed as inherently connected to an "old-fashioned prompt." We'll never know, because there was no qualitative material except for "document analysis," which I presume means "we read the written prompts."
Suppose AI-generated prompts become listed as a "best practice." Developing student writers, recognizing the futility of competing with machine-generated writing, learn to feed these synthetic prompts into AI writing systems. These systems produce essays precisely calibrated to meet the nuances of the prompt, their outputs far more polished and "correct" than anything a developing writer could create. The student's role diminishes to that of a courier, transferring text from one AI system to another. Thought that is original to the writer, however unoriginal to the world, the charming hiccups of a developing student voice, the failed experiments worth more than compliance: all fade away.
Rubrics to Refine and Grade the Value of the Product
The evaluation process completes this closed loop. Systems like StudyFetch already provide instant, targeted feedback on student essays, focusing on structural and mechanical elements such as grammar, style, and organization. Students can use this feedback to refine their drafts repeatedly, potentially feeding the AI's suggestions back into the generator for further revisions.
AI systems scan these submitted essays (themselves AI-generated, remember) and score them with mechanical consistency against predetermined, AI-generated criteria. Feedback is generated automatically through pattern matching, devoid of insight or understanding, and focused squarely on pointing out the aspects of the writing that do not align with the top score.
In theory, when the human writer has done all they know to do to satisfy the rubric, or when the time is up, or when directed to, the AI assigns a final grade without human review, based purely on how well one exchange series within an AI loop has anticipated the requirements of another. Unfortunately, as teachers learn to trust AI to deliver reasonably reliable grades based on a rubric the AI itself may have designed (a real possibility, given the algorithmic nature of the prompt-rubric framework that dominates instruction right now), the debate over AI may be over, and AI triumphs.
In practice, students soon learn that they themselves need not play the game of "buttering up the paper for the rubric." They feed the AI-generated feedback back into the generators and prompt for revision, creating an iterative cycle increasingly divorced from human thought or expression. Each iteration produces work more finely tuned to AI evaluation criteria but further removed from human communication. The entire process becomes an exercise in optimization rather than learning.
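What does this loop look like stripped to its skeleton? Here is a minimal sketch in Python, offered purely as an illustration: generate_essay, score_essay, and revise_essay are hypothetical stand-ins (no real AI system or product is being called), and an "essay" is reduced to a dictionary of rubric scores. The point is the shape of the loop, in which optimization against the rubric proceeds, and the grade is assigned, with no human judgment anywhere inside it.

    # A purely illustrative sketch of the closed loop described above.
    # No real AI system is called: generate_essay, score_essay, and
    # revise_essay are hypothetical stand-ins, and an "essay" is reduced
    # to a dictionary of rubric scores.
    import random

    RUBRIC = ["thesis", "organization", "evidence", "grammar", "style"]

    def generate_essay(prompt: str) -> dict:
        """Stand-in for an AI generator: returns middling rubric scores."""
        return {criterion: random.uniform(0.5, 0.8) for criterion in RUBRIC}

    def score_essay(essay: dict) -> float:
        """Stand-in for an AI grader: the grade is the mean rubric score."""
        return sum(essay.values()) / len(essay)

    def revise_essay(essay: dict, feedback: list) -> dict:
        """Feed the machine's feedback back in: nudge the weak criteria upward."""
        return {c: min(1.0, s + (0.1 if c in feedback else 0.0))
                for c, s in essay.items()}

    random.seed(0)  # deterministic toy run
    essay = generate_essay("AI-generated prompt goes here")

    # The optimization loop: no human judgment appears anywhere inside it.
    for iteration in range(10):
        if score_essay(essay) >= 0.95:  # "top score" reached; take the grade
            break
        feedback = [c for c, s in essay.items() if s < 0.95]  # targeted AI feedback
        essay = revise_essay(essay, feedback)

    print(f"Final grade after {iteration} revisions: {score_essay(essay):.2f}")

Note the stopping conditions: a score threshold or an iteration cap, never a human reader.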
Looking at One Possible World
The system perpetuates itself: AI analysis of performance patterns in student essays, sparked by AI-generated prompts and refined according to AI-generated rubrics, leads to automatic adjustments of future prompts and rubrics, and to ever more sophisticated students gaming a system that depends upon letter grades as academic capital. Teaching devolves into helping students better exploit AI tools rather than developing their thinking and writing abilities. Original face-to-face human pedagogy disappears, replaced by interface management and prompt engineering. Looking on the bright side, it might work for a short time during a pandemic.
The human cost of this scenario is devastating. Teachers, once mentors in the development of critical thinking and creative expression, become technical facilitators monitoring AI interactions. Students lose opportunities to develop their own experiential background and knowledge about writing, to struggle with complex ideas on their own in a creative flow, to learn from failure, to experience the bone-deep satisfaction of mastering written communication. Writing instruction changes from a crucial part of human development, one that matures across lifewide and lifelong experiences, into a purely procedural, technical matter of compliance.
The long-term implications for society are equally troubling. Students graduate without developing evolving writing abilities or critical thinking skills—but they know how to use AI to complete technical writing tasks. Their capacity for nuanced written discourse atrophies, replaced by dependence on AI mediation. The gap between AI-mediated and authentic human communication widens until meaningful written exchange between people becomes increasingly rare.
Lions and Tigers and Bears, Oh My!
This nightmare scenario represents the ultimate failure of educational adaptation to AI. Instead of thoughtfully and professionally teaching student writers how to use these powerful tools to do things students could never do alone, waiting around to act means students never learn about the writing that only humans can do. Operating for efficiency, or out of fear, isn't the way to preserve human agency and development. If we resist outright, settle for fragmented local experiments, or let ourselves be sold a story by corporate AI gurus who aren't classroom writing teachers, we allow the technology to subsume the education of human writers. This nightmare serves as a warning about what happens when we fail to maintain human-centered writing education in the face of technological advancement.
The nightmare isn't about academic integrity or plagiarism, important as that issue is. The nightmare is about the potential loss of the human development and learning that make writing meaningful. When AI transforms from a tool that assists human thought and writing into a replacement for human thought and writing, the potential loss is a generation unable to think and write for themselves.
This dystopian possibility underscores why developing and researching theoretical frameworks for post-process pedagogy that preserve human agency ought to be a top priority. Without shared theoretical frameworks to guide scientific investigation, teachers scattered around the country cannot contribute evidence that speaks to clear hypotheses amenable to experiment, or to the stubborn paradoxes made visible by qualitative research.
A well-developed theoretical framework under the aegis of the National Council of Teachers of English would help identify where and when AI tools can enhance rather than replace human writing development. For instance, AI might effectively assist in early organizational tasks like outlining or mapping ideas, where its ability to suggest structural patterns can expand student thinking without compromising authentic voice. Similarly, AI could be valuable in late-stage editing when writers are polishing nearly finished work. These hypotheses are testable and researchable.
However, the framework would also clearly delineate, and hypothesize about, phases where AI intervention could be harmful: during initial drafting, when writers need to develop their own ideas and record echoes of ideas in their own words before attending to them consciously; or during revision, when writers must learn to value their own work rather than rely on a machine to value it for them, and to strengthen their writing based on their own judgment. Without such theoretical guidance, teachers and students lack principled ways to determine when AI serves as a helpful tool and when we risk replacing essential learning.
Thoughtfully incorporating AI tools, with integrity and with appreciation for what writing means for human beings (the purpose of a theoretical framework) embodied and embedded in a real world (the lives we lead), is crucial for the future of writing education. Without a shared framework, we risk incoherence, turf battles, competition, and all the other ills that flow from atheoretical anecdotes, and we risk sacrificing, on the altar of technological efficiency, the very skills and capacities we’ve taken for granted.