Empirical research on real-world uses of language bots in higher education classrooms has picked up steam over the past year. A provisional, well-supported, pragmatic framework for thinking about AI-integrated writing instruction in middle and high school is coming to a theater near you. Unfortunately, precious little research has been published specifically describing findings from live classrooms. Nonetheless, the question “so what do we do now?” is seeping through the angst, displacing denial.
This moment reinforces the need to change business as usual in the writing classroom. Anyone who has taught writing has to ask, “Yes, but what changes? Can you be more specific?”
AI has brought the issue to a head. The sentiment is building that writing instruction must move beyond formulaic approaches toward ones that increase student motivation and self-assessment capacity. In this moment of rapid change, it is essential to heed Anderson's (2022) warning against "technophilia," the uncritical acceptance of new technologies as inherently beneficial:
“Despite the imposition of technology in most aspects of teaching and learning, little attention has been given to the implications educational technology has for healthy student development, humane pedagogy, teacher labor, academic freedom, and the aims of social justice. Rather than merely a set of neutral tools, educational technology is bound up with systems of power and privilege” (Anderson, 2022).
This caution is particularly relevant to AI writing tools, where institutional pressures for "innovation" and "efficiency" may push for adoption without sufficient consideration of pedagogical and ethical implications. Ironically, this same pressure can back teachers into a corner, where they rightfully defend their professional prerogative to practice without fear or favor.
A useful example: automated essay scoring. Technophiliacs are enthralled. These machines can grade more papers in a minute than a teacher can grade in a week. But at what cost? Machine scoring works when large batches of cookie-cutter texts at varying levels of quality are used to train the machine. The challenge before this generation of teachers is to stop thinking of writing instruction as a cookie-cutter task.
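To make the cookie-cutter point concrete, here is a minimal sketch, in Python, of how a conventional automated essay scorer is typically built: surface features extracted from a batch of already-scored essays are fit to a regression model. The essays, scores, and model choice below are illustrative assumptions, not any vendor's actual system.

```python
# Minimal sketch of a conventional automated essay scorer (illustrative only).
# Assumes a hypothetical corpus of scored essays; surface word patterns are fit
# to human holistic scores with ridge regression, in the spirit of classic AES systems.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical training data: formulaic five-paragraph essays with holistic scores.
essays = [
    "In conclusion, there are three reasons why school uniforms are good...",
    "First, secondly, and finally, technology helps students learn...",
    "My first reason is. My second reason is. In conclusion...",
]
scores = [4, 5, 2]  # human holistic scores on a 1-6 scale (illustrative)

# The model learns which surface patterns co-occur with high scores;
# it rewards the template, not the thinking behind the writing.
scorer = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge())
scorer.fit(essays, scores)

print(scorer.predict(["Firstly, secondly, and in conclusion, uniforms are good."]))
```

The sketch is not a critique of any particular product; it simply shows why a model trained on batches of similar texts has no way to value an idiosyncratic, audience-aware piece of writing.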
The challenge is to find approaches that neither uncritically embrace nor reflexively reject AI, but rather position it within student-centered writing instruction that prioritizes human development and agency. Every school needs technical and pedagogical AI leadership with reassigned time to lead the faculty in finding positive uses of AI rather than writing policies that diminish innovation and defend the status quo.
The Formula Problem in an AI Context
The dominance of formulaic writing instruction in U.S. schools has been well documented over decades and has created particular vulnerabilities in the age of AI. When writing is reduced to acts of following templates or formulas often memorialized in catchy acronyms, students may see little difference between producing formulaic text themselves and having AI generate it.
However, research evidence supports the argument that students who are autonomously motivated and understand writing as meaningful expression are less likely to engage in academic dishonesty (Stephens & Wangaard, 2013) and therefore more likely to see AI tools as aids to their own authentic writing process rather than replacements for it.
As Anderson (2022) argued, educational technology is often promoted through market-based logic that prioritizes efficiency and standardization over human development. Buyer beware. This neoliberal framework threatens to position AI as simply another tool for standardization and optimization of writing instruction, rather than as a potential aid to authentic student development.
Without structural changes, such as smaller writing class sizes, organized writing-across-the-curriculum resources, and student publication for the community rather than submission to a teacher-as-examiner audience, the potential for authentic and autonomous student development remains what it is today: AI remains a threat rather than a tool.
Building Motivation and Assessment Capacity
Recent research points to two key factors that can help students develop both writing competence and ethical approaches to AI use. Each factor calls for reconfigured participation frameworks for writing activity.
Autonomous Motivation
De Smedt et al. (2019) demonstrated that students who write from internal motivation rather than external pressure produce better writing and engage more deeply in the writing process. This internal motivation becomes crucial as AI tools become more accessible. Students need to value their own ideas enough to use AI thoughtfully rather than surrendering their writing agency to it.
Student-Constructed Assessment Criteria
When students participate in developing writing assessment criteria through co-constructed rubrics, they gain deeper understanding of what makes writing effective (Abdul Ghaffar et al., 2020). They also gain insight into the writing task itself, that is, the values that should be reflected in the product. This understanding is essential for responsible AI use: students who can evaluate writing quality can better assess AI outputs and engage with them appropriately.
Reimagining AI Integration Through Student Agency
Research findings already support productive ways to incorporate AI tools into writing instruction while avoiding what Anderson (2022) calls the "darker side of innovation." For example:
Revision Partner Model
Students who have helped develop assessment criteria can use AI to generate multiple revision suggestions for their drafts, then apply their assessment knowledge to evaluate which suggestions actually improve their writing. The goal is not to accept AI revisions wholesale but to engage critically with them based on established quality criteria. This procedure can be mirrored by peer sessions examining AI feedback collectively.
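For teachers or students comfortable scripting their own tools, the workflow might be sketched as follows. This is a hypothetical illustration using the OpenAI Python client; the model name, draft sentence, rubric criteria, and prompt wording are all assumptions, and the same pattern can be carried out by hand in any chatbot interface.

```python
# Sketch of the revision-partner workflow (hypothetical; adapt to any chatbot).
# The AI proposes several revisions; the student, not the model, judges them
# against the class's co-constructed rubric criteria.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

draft = "Climate change is bad and we should do something about it soon."
rubric = ["clear claim", "specific evidence", "voice appropriate to audience"]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    n=3,                  # ask for three alternative revisions to compare
    messages=[
        {"role": "system",
         "content": "Suggest a revision of the student's sentence. "
                    "Do not add facts the student did not supply."},
        {"role": "user", "content": draft},
    ],
)

# The student applies the rubric to each suggestion and records a decision.
for i, choice in enumerate(response.choices, start=1):
    print(f"Suggestion {i}: {choice.message.content}")
    print(f"  Evaluate against: {', '.join(rubric)}")
```

The point of the sketch is the division of labor: the model generates options, and the student's rubric knowledge does the deciding.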
Idea Generation Assistant
Rather than having AI write entire texts, autonomously motivated students can use it to expand their thinking while maintaining ownership of their writing. For example, a student writing about climate change might prompt the AI to generate unexpected perspectives or connections, then evaluate which ideas advance their own argument.
This use of AI is analogous to, and can be triangulated with, interviewing people knowledgeable about the topic. In both instances the writer is collecting information from sources of varying reliability and relevance while activating a critical eye.
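One concrete way to keep ownership with the writer is to anchor the prompt in the writer's own claim, as in this small sketch; the claim, the number of perspectives requested, and the wording are hypothetical.

```python
# Sketch of an idea-generation prompt that keeps the writer's claim at the center.
# The function only builds the prompt text; the student evaluates whatever comes back.
def perspective_prompt(claim: str, n_perspectives: int = 3) -> str:
    return (
        f"My working claim is: {claim}\n"
        f"List {n_perspectives} unexpected perspectives or connections I may be "
        "missing. Do not rewrite my claim and do not draft any paragraphs; "
        "give me one sentence per perspective so I can decide what to pursue."
    )

print(perspective_prompt(
    "Local zoning rules matter more to climate outcomes than national pledges."
))
```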
Additional Models for AI Integration in Student-Centered Writing
Collaborative Revision and Editing Workshops
Students who have participated in developing assessment criteria can work in peer groups to analyze AI-generated ideas for revisions of their work. When students already understand quality criteria through co-construction of rubrics, they can engage in sophisticated discussions about whether AI revisions strengthen or weaken aspects like tone, style, clarity, and purpose.
This approach aligns with what Anderson (2022) calls the need for "critical dialogue" about technology's role in learning, while building on De Smedt et al.'s (2019) findings about the benefits of peer collaboration for autonomous motivation. Future research on the pedagogical enactment of these critical dialogues could become a unifying theme.
Genre Analysis Practice
Students can use AI to generate examples of different writing genres, then apply co-constructed genre rubrics to analyze how well these examples meet genre expectations. This builds on Abdul Ghaffar et al.'s (2020) finding that students who help create assessment criteria, including genre criteria, develop stronger genre awareness and metalinguistic understanding.
For example, before engaging in genre-based writing practice, say, writing a memoir, students might listen actively to a lecture about the genre, work collaboratively to draft a ‘genre rubric,’ engage AI in an analysis of their rubric draft in small huddles, and then, via a teacher-coordinated class discussion, create a democratic ‘genre rubric’ anchored by examples of published essays.
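One way to organize the huddle work is a simple record of rubric criteria and the peers' judgments about an AI-generated genre example. The criteria and rating scale in this sketch are illustrative assumptions, not a prescribed memoir rubric.

```python
# Sketch of a co-constructed genre rubric applied to an AI-generated example.
# Criteria come from the class discussion; the ratings come from the peer huddle.
from dataclasses import dataclass, field

@dataclass
class GenreRubric:
    genre: str
    criteria: list[str]
    notes: dict[str, str] = field(default_factory=dict)

    def evaluate(self, ratings: dict[str, int]) -> float:
        """Average the huddle's 1-4 ratings across the co-constructed criteria."""
        return sum(ratings[c] for c in self.criteria) / len(self.criteria)

memoir_rubric = GenreRubric(
    genre="memoir",
    criteria=["reflective distance", "scene vs. summary", "significance made explicit"],
)

# The peer huddle rates an AI-generated memoir excerpt against the class rubric.
huddle_ratings = {
    "reflective distance": 2,
    "scene vs. summary": 3,
    "significance made explicit": 1,
}
print(memoir_rubric.evaluate(huddle_ratings))  # -> 2.0
```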
Implementation Considerations and Challenges
Establishing Ethical Frameworks
Before introducing AI tools, teachers should engage students in developing shared principles for ethical AI use. This builds on the same student agency that makes co-constructed rubrics effective. As Anderson (2022) emphasized, technology integration must be guided by ethical considerations rather than market or related political pressures.
Principles might include acknowledging AI assistance, using AI to enhance rather than replace student thinking, maintaining final human authority over writing decisions, and protecting student privacy in AI interactions. When and how to cite uses of AI in the text or reference list is still an unsettled question in the macrosphere, but writing teachers must engage students in being transparent when an AI transaction contributed a significant portion of the text.
Scaffolding AI Literacy
Just as writing instruction requires explicit teaching of strategies (Graham & Perin, 2007), students need strategic learning to craft effective AI prompts. Sustaining executive control over AI sessions (steering AI output toward the writer's task destination) can spell the difference between a productive Q&A and a fishing expedition. Prompting AI effectively is as nuanced as writing itself and requires a clear writing objective, even when that objective is full-blown exploration.
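The difference between a fishing expedition and an objective-anchored session can be made visible by comparing prompts. Both examples below are hypothetical wording, not a prescribed formula; the task and audience are invented for illustration.

```python
# Two prompts for the same task: one vague, one anchored to the writer's objective.
# The second keeps executive control with the writer by stating the task, the
# audience, and the kind of help wanted.
vague_prompt = "Tell me about school lunch."

anchored_prompt = (
    "I am writing an op-ed for our school paper arguing that lunch periods are "
    "too short. My objective: persuade the principal to add ten minutes. "
    "Ask me three questions that would expose the weakest part of my argument. "
    "Do not write any part of the op-ed for me."
)

print(anchored_prompt)
```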
Evaluating AI outputs critically is a subset of applying a critical reading toolset. Human texts call upon readers to assess the intention of the writer, based on linguistic data, as a prerequisite to evaluating information. AI-generated texts, however, have no “intention” to evaluate except for what the user provides. AI failures in the form of irrelevancy, hallucinations, racial or gender bias, misinformation, and more are often the result of something in an immediate or past prompt or of an inherent limitation of the tool. The ability to make this critical evaluation of AI output accurately is a measure of AI literacy.
Understanding AI limitations is crucial, but applying this understanding is a skill learned only through systematic reflection on experience. Novice users enthralled by the magic or the seemingly human nature of the machine gradually change this mindset and come to see through the illusion almost instinctively.
Without explicit instruction and practice in a legitimate task environment, however, this reflexive sense that AI is a machine may not automatically shape how students use it. Given simple, straightforward explanations of computational linguistics and monitored freedom to decide the direction of their writing, student writers may come to treat AI limitations like the center divide on a highway: staying in one's lane is not just an instinct but an ever-present duty to keep to the human side of the road.
Respecting student voice while using AI assistance can be a disposition learned from writing experience inside a familiar community in which publication, not points, is the norm; it may be difficult to teach directly except through modeling. Students may not be accustomed to respecting their individual powers, having been socialized in a traditional approach that values adherence to someone else's intentions and voice, a kind of “acting like a writer” versus “being a writer.”
Adapting Assessment Practices
Traditional writing assessment may need updating to account for appropriate AI use. Building on Panadero and Jonsson's (2013) work on formative assessment, this might involve process documentation showing how AI was used, assessing a student's critical evaluation of AI suggestions, valuing unique student perspectives over technical perfection, and assessing revision decisions in addition to final products.
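Process documentation can be as lightweight as a running log of AI transactions and the writer's decisions. The fields and the example entry below are assumptions about what a teacher might ask for, not a standard format.

```python
# Sketch of a lightweight AI-use log a student could submit alongside a draft.
# Each entry records what was asked, what came back, and the writer's decision,
# which is what the teacher actually assesses.
import json
from datetime import date

log_entry = {
    "date": str(date.today()),
    "purpose": "revision suggestions for the introduction",
    "prompt": "Suggest two ways to sharpen my opening claim without adding new facts.",
    "ai_suggestion_summary": "Offered a more specific claim and a question-style hook.",
    "decision": "Kept my original hook; adopted the more specific claim in my own words.",
    "rubric_criteria_considered": ["clear claim", "voice"],
}

print(json.dumps(log_entry, indent=2))
```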
The success of writing in the real world is almost always determined by an audience. Publication activity in a middle or high school classroom can never fully reflect the cauldron of the world as an audience. For one thing, middle and high school students are not mini adults, and teachers must protect their vulnerabilities. But evaluating writing in an exclusively teacher-as-examiner setting risks undermining the autonomous mindset student writers need to take up a command position toward the AI tool.
Resisting Market-based Frameworks
As noted earlier, Anderson (2022) argued that educational technology is often promoted through market-based logic that prioritizes efficiency and standardization over human development. The bewildering array of AI wizards bandied about in advertisements (tutors, problem solvers, pick-me-pick-me bots promising miracles) can be intimidating.
To avoid this trap, acquisition of AI tools for the writing classroom should prioritize pedagogical needs over technological capabilities. AI companies must be brought to heel. It is the height of absurdity to conclude that OpenAI knows how to teach students to write. From this premise flows the inference that AI companies have no business marketing their products as writing instruction tools.
In the writing classroom, choosing a commercial package promising pie in the sky may work about as well as those scripted lessons teachers are being assigned to use. Maintaining focus on pedagogical needs, student agency, and critical thinking is a core concern. Ironically, the built-in motive of the bot is to please the user. Helping students understand that this motive is a double-edged sword comes from teaching, not preaching or proselytizing.
Resist pressure to adopt AI simply because it appears “innovative.” Consider how AI use affects teacher autonomy and labor. Evaluate AI integration based on learning outcomes rather than efficiency metrics.
Above all, avoid making a connection between AI and improved standardized test scores. AI used in this fashion is an intensification of the factory model responsible for disguising and rationalizing a false meritocracy. In many ways, the standardized test interest has already taken the classroom away from teachers and learners.
AI literacy is a brand-new phenomenon. Nobody knows how AI and standardized tests interact, but we do know the effects of standardized schooling on children, on the haves and the have-nots. Choosing tools for instruction must be done to serve pedagogy, not to serve lagging indicators like standardized tests.
Takeaway
The emergence of AI writing tools makes it more crucial than ever to move beyond formulaic instruction toward approaches that build student autonomy as well as self-assessment and self-regulatory capacity. When students develop autonomous motivation through co-constructed genre and task criteria and exercise this motivation in meaningful writing experiences, it is reasonable to assume they are better positioned to use AI tools responsibly and productively, and to grow into a future guaranteed to feature AI.
The integration of AI in writing instruction must avoid what Anderson (2022) calls the "darker side of innovation" where technology adoption is driven by market forces rather than pedagogical wisdom. By grounding AI use in student agency, teachers can work toward what Anderson advocated: critical dialogue about the appropriate role of technology in teaching and learning.
As Reddy and Andrade (2010) argued, assessment practices should empower students rather than constrain them. This principle becomes even more crucial in the age of AI. The future of writing instruction lies not in trying to bury or outrun AI, but in helping students develop the internal motivation and critical capacity to use it for their own interests in community. This requires moving beyond both traditional formulas and knee-jerk resistance to AI toward approaches that build student agency, judgment, and authentic writing engagement.
References
Abdul Ghaffar, M., Khairallah, M., & Salloum, S. (2020). Co-constructed rubrics and assessment for learning: The impact on middle school students' attitudes and writing skills. Assessing Writing, 45, 100468.
Anderson, M. (2022). Public education in the digital age: Neoliberalism, EdTech, and the future of our schools. Routledge.
Bazerman, C., Applebee, A. N., Berninger, V. W., Brandt, D., Graham, S., Matsuda, P. K., ... & Schleppegrell, M. (2017). Taking the long view on writing development. Research in the Teaching of English, 51(3), 351-360.
De Smedt, F., Graham, S., & Van Keer, H. (2019). The bright and dark side of writing motivation: Effects of explicit instruction and peer assistance. The Journal of Educational Research, 112(2), 152-167.
Graham, S., & Perin, D. (2007). Writing next: Effective strategies to improve writing of adolescents in middle and high schools. Carnegie Corporation of New York.
Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited: A review. Educational Research Review, 9, 129-144.
Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment and Evaluation in Higher Education, 35(4), 435-448.
Stephens, J. M., & Wangaard, D. B. (2013). Using the epidemic of academic dishonesty as an opportunity for character education: A three-year mixed methods study (with mixed results). Peabody Journal of Education, 88(2), 159-179.
https://www.routledge.com/Public-Education-in-the-Digital-Age-Neoliberalism-EdTech-and-the-Future-of-Our-Schools/Anderson/p/book/9781032382791
https://www.tandfonline.com/doi/abs/10.1080/0161956X.2013.775868
https://doi.org/10.1016/j.asw.2020.100468
https://www.carnegie.org/publications/writing-next-effective-strategies-to-improve-writing-of-adolescents-in-middle-and-high-schools/
https://eric.ed.gov/?id=EJ999454
https://www.semanticscholar.org/paper/A-review-of-rubric-use-in-higher-education-Reddy-Andrade/1770e05b8f505a9c4c8ad501cccd15b8bccbbc91