Written communication for a specific audience and a specific purpose entails careful attention to content that fulfills the purpose, to inferences that can safely go unstated, to prior knowledge which may need building or refreshing, to emphasis on important points, to tone and register and persona, to emotional resonance, word choice, and a range of genre-related, often ideological concerns. Rather than being surprised at how weak our writing skills are in this country[1], I’m surprised that our citizens are doing as well as they are given the snowballing complexities of the past few decades in public education.
Bureaucratic written communication for a nameless, faceless, unknown audience disappears all the careful attention to writing process strategies delineated above. If my audience is bureaucratic or non-existent in any real sense, I can fill my essay with content that fulfills my purpose whether or not it works for an audience. I need not worry about inferences my audience might take away because my audience is irrelevant so long as my premises are warranted by evidence, and my conventions are solid.
Teaching high school students to compose an essay for an academic purpose—to demonstrate the capacity to write an essay—in a task-controlled environment is like teaching someone to cook by forcing them to follow recipes with exact measurements down to the milligram, without giving them opportunities to explore the principles of flavor, texture, or technique. They might produce an edible dish for assessment, but they haven't learned how to cook—to understand ingredients, adapt to circumstances, or create something meaningful that responds to the actual needs of those who will eat it.
From recent structured conversations with high school seniors about LLMs in the writing classroom, I discerned in their experiences a theme of the academic audience rooted in symbolic power, as distinct from a full-bodied conception of a living audience, and I could infer its impact on their conceptions of the writing process. Over the past few years, as they watched AI explode like wildfire in dry timber, some paid little attention at first, until they learned about other students who had no qualms about turning in simulated essays, especially when more important assignments were consuming their time and effort. The students I talked with were disgusted with offloads and freeloads and affirmed that “cheating” was not the answer, though a few told me that when their backs were against the wall, they had used a bot.
Their definition of cheating, however, was more about “disobeying the rules of the teacher” than about “flouting academic integrity and destroying authentic human writing.” Their disgust with cheaters arose from what they saw as wholesale disrespect for the role of the teacher rather than from any devaluing of academic integrity as it relates to academic freedom, intellectual property, and moral integrity. Empirically, this distinction surfaced a problem for writing instruction: how we as teachers conceptualize academic integrity in controlled writing environments, and how students take in this message—or don’t. Respect for a teacher may be the starting point for understanding academic integrity, but it isn’t a mature grasp of why that integrity matters for the Sciences, the Humanities, and the Professions.
They could talk immediately and confidently about their experiences with writing prompts, assignments, rubrics, and revision; I needed this preliminary view of their understanding of writing before I could get to what I really wanted to learn from them. It became clear that in lieu of constructing a functional sense of a particular audience and purpose early in their writing process, they had learned to read their teacher’s directions carefully and to follow the rubric. The mark of success lay in compliance, in checking off the boxes in the rubric. One student commented that it was actually fun and sort of relaxing, like a puzzle, easy once you understand the steps. In essence, the rubric became a proxy audience.
Peer response groups, too, were often keyed to the rubric. From their narrations I could picture students working quietly in small groups, the only noise the muffled scratching of pens or pencils annotating the draft according to the rubric, a form of simulated collaborative learning. Instead of students orally sharing their spontaneous responses to drafts, these sessions suggested assembly-line quality control. When personal feedback was given, it was in writing, and it was linked specifically to the rubric.
Even during these peer feedback sessions, the rubric served as the audience. In this clinical environment, the authentic voices of real readers—with unique reactions, confusions, questions, and insights—were silenced in favor of standardized criteria, transforming what could have been a dialogue about ideas and expression into an assessment exercise where students behaved not as engaged thinkers but as surrogate graders for the institution.
It’s not surprising that this structured, performance-focused posture toward teaching writing in the high school English classroom became discernible in their interviews. I can’t claim any special knowledge about how widespread this model of writing instruction is, nor how faithfully high school English teachers implement it, but I have read about and talked with colleagues in teacher preparation enough to have evidence that this bureaucratic protocol is part of instructional practice, not a straw man I’m making up to knock down. The Common Core standards treat writing instruction as a tool for teaching critical thinking. Moreover, the Core expects writing instruction to emerge from close textual analysis. Teachers have implemented these standards to the degree that, especially for younger teachers, there is no other way.
In theory, one might argue that when we teach writing primarily as proving arguments about interpretations of texts, we narrow the scope of writing to analytical performance rather than authentic communication. Students learn to produce formulaic responses that demonstrate critical thinking within confined parameters, but they rarely engage with writing as a meaningful act of discovery or expression for a real audience. This approach, while more manageable for overwhelmed teachers with heavy workloads, reduces writing to a technical exercise in textual dissection rather than a dynamic process of developing ideas.
The Common Core's emphasis on close textual analysis demands this approach, positioning writing as subordinate to reading rather than as a powerful mode of thinking in its own right. Students consequently can easily develop a limited conception of writing's purpose and potential, seeing it primarily as a means to demonstrate persuasive competence in textual interpretation rather than understanding that writing is a way to explore, communicate, and transform perspectives.
Unfortunately, though the number of students handing in AI slop for a grade is unknown, I suspect, based on this admittedly small sample, that at least some students have internalized the idea that writing is just a language game with rules, like Scrabble played with sentences instead of words. Even before the Common Core, audience awareness was an abstraction mentioned clinically in rubrics, invoked as a figment of the imagination, or ignored completely.
Doubly unfortunate, this tension forces writing teachers to navigate conflicting roles: good cop v. bad cop, sensitive reader moved by poignant or intelligent passages v. skilled rhetorician and grammarian with a rubric and a red pencil. In the end, it is the teacher who decides whether the writing works, not the student in negotiation with the teacher, weighing audience responses. The audience has no role in the matter. Rather than taking in genuine reader responses to a carefully designed rough draft to find out how it worked, student writers wait patiently for a critical analysis, a paper touch-up, a grade, and, hopefully, a smiley face.
Triply unfortunate, teachers are forced to consider extraneous issues far afield from teaching a writer in the process of writing when issuing a grade. Gone are the days when high school writing teachers had the prerogative to value the writing of poetry, say, or autobiographical writing. They must think, albeit sometimes unconsciously, about responsibilities to the curriculum, to the Department, to college admissions committee members, to the child’s parents, to the field regarding grade integrity and fairness, and about how awful it feels to be in this position.
Enter AI, the language machine.
I’m starting to wonder whether the resistance to LLMs doesn’t derive from using the word “intelligence” to characterize their capacity. They’re really more like language machines. The moniker “artificial intelligence” was coined in the mid-1950s partly as an advertisement to lure top-notch computer scientists to a summer workshop at Dartmouth. LLMs simulate human language the way cubic zirconia simulates diamonds. Both present convincing external characteristics of their authentic counterparts—an LLM produces language that seems intelligent and reasoned, just as cubic zirconia displays the brilliance and clarity of a real, perfect diamond. Teachers are left trying to distinguish the real diamonds from the cubic zirconia, with a weird inversion: very often the cubic zirconia essays are better than the real diamonds.
This misalignment between terminology (intelligence machine) and reality (language machine) leads either to overestimation of LLMs' capabilities or to dismissal of their significant practical applications. A more precise framework would acknowledge these systems as sophisticated pattern-recognition tools that can perform language-related tasks, while clearly distinguishing their operation from human thought and language use. By recalibrating our understanding, terminology, and expectations, we can better appreciate LLMs for what they are: powerful computational tools that excel at simulating human language use by assembling semantic traces in algorithmic syntax according to statistical probabilities.
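For readers who want a concrete picture of what “assembling language according to statistical probabilities” means, here is a deliberately tiny sketch in Python. It is a toy bigram model built on an invented ten-word corpus of my own, nothing like the scale or architecture of an actual LLM, but it shows the core move: the next word is chosen from counted patterns, not from understanding.

```python
# A toy illustration of statistical next-word choice (not how production LLMs
# are built): the continuation is sampled from counted patterns in a corpus,
# with no comprehension behind the choice.
import random
from collections import Counter, defaultdict

corpus = "the student writes the essay and the teacher reads the essay".split()

# Count which words tend to follow which word: crude "semantic traces" stored as statistics.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def pick_next(word):
    """Choose a continuation weighted by how often it followed `word` in the corpus."""
    words, counts = zip(*following[word].items())
    return random.choices(words, weights=counts)[0]

word = "the"
generated = [word]
for _ in range(6):
    word = pick_next(word)
    generated.append(word)

print(" ".join(generated))  # e.g., "the essay and the teacher reads the"
```

A real LLM replaces these word counts with billions of learned parameters and a far longer context window, but the underlying operation remains prediction, not intention.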
I’m not sanguine about using LLMs to evaluate student papers if it means diminishing the role of real readers of those papers. I’m not sanguine about the practice of a human teacher evaluating papers mechanically via a rubric for the same reason—readers don’t read with a rubric. In a writing classroom, writing projects must be undertaken by humans for humans, developed, published, and read by a human audience. The writer’s own evaluation of effectiveness is far more important than anyone else’s, for that’s where durable learning takes place, and without contact with an audience, writers miss perhaps the most valuable feedback on their work. Now, to the degree that students experience extreme bureaucratic writing tasks for a non-human audience, language machines ghostwriting simulated essays for students will be hard to guard against. How many people wear cubic zirconia, and is there anything wrong with that?
There are good counterarguments to my position which must be taken into account pragmatically in any restructuring of writing instruction. Standards provide necessary structure, some will say. While rubrics and standardized approaches might seem limiting, they ensure all students develop fundamental writing skills. Without these structures, disparities in writing instruction might widen, especially for students with less access to quality education.
Academic writing has legitimate purposes, others will note. The focus on argumentative and analytical writing prepares students for college-level work and professional environments where such skills are essential. These forms have their own authentic purposes beyond "bureaucratic exercises." Real-world writing often has constraints, for sure. Many professional writing contexts (legal documents, business communications, technical writing) require adherence to specific formats and conventions. Learning to write within constraints is itself an authentic skill.
Skilled teachers can help students imagine and write for different audiences within the academic framework, developing what rhetorical theorists might call "audience awareness" rather than direct audience engagement. Moreover, assessment necessitates standardization. To fairly evaluate student work across classrooms and schools, some degree of standardization through rubrics may be appropriate. This layer of accountability doesn't mean the instruction itself must be mechanical.
Some will say I’ve created a false dichotomy. I agree. I intended to heighten the sense of a dichotomy by presenting the extremes (idealized writing instruction in writing communities vs. writing instruction where students easily succeed with simulated essays). I know this essay presents a false dichotomy, and it does so for a very good reason: sometimes it takes a hard-nosed look at the extremes to find a meeting place in the middle.
Good writing instruction can mediate between standards and creativity, between mechanical skills and human voice and response. Yes, the either/or framing oversimplifies complex classroom realities. Resource constraints are real. The ideal writing classroom requires smaller class sizes, more teacher time, and different assessment structures, realities that many schools simply cannot implement given current resources. By the same token, giving free rein to bots without intensive instruction and guided practice in their use is dangerous. But the downstream costs of letting things go on as they are could be devastating for students, for teachers, for writing, and for society.
The human can disappear from writing if we let it happen by digging ourselves into ideological camps and false dichotomies and screaming from the rooftops about the death of the Humanities. We should join together to defeat the movement to dismantle the federal Department of Education and instead strengthen it, the only organizational structure with the scope and resources to coordinate a national response to issues like AI and writing instruction.
[1] Under ordinary circumstances, I would feel obligated to cite sources for this assertion. That would be the easiest thing to do. However, these are not ordinary circumstances we are in today on the issue of writing instruction, and citing a source would be tantamount to suggesting there could be an alternate perspective. In this instance, I feel confident in my views on this issue given my background and experiences. I’ve documented and commented on this issue in one fashion or another in the 500+ posts I’ve written on Substack over the past three years. If you would like to discuss how I know this, message or email me with some sense of the kind of evidence you would need to see to accept the assertion.