A Modest Module Proposal: Reader as Critical Decoder of Synthetic Output
Year-Long Course in AI, Learning, and Writing
Assume for a moment that neither teachers nor professors are interested in integrating LMs into their courses. Indeed, anyone experienced with redoing the plumbing through curriculum negotiations understands what "integration" means. It means it's not going to happen.
Schools spent millions of dollars on computer labs, interactive whiteboards, and one-to-one laptop programs with the promise that technology would be "integrated across the curriculum." Teachers were told to weave digital tools into every subject. What actually happened?
Teachers used interactive whiteboards as expensive projectors and laptops became word processors. The integration mandate meant no one owned the technology curriculum. It was everyone's responsibility and therefore no one's. Math teachers weren't trained computer scientists, English teachers weren't tech support specialists, and the promised transformation never materialized because integration assumed teachers could become experts in everything.
Education reformers declared that critical thinking should be integrated into every subject rather than taught as a discrete skill set. Every teacher was supposed to embed Socratic questioning, logical reasoning, and analytical thinking into their existing content. Everyone assumed teachers had a deep grasp of critical thinking pedagogy and could seamlessly weave it into biology lessons, algebra problems, and history discussions.
Instead, most teachers continued teaching as they always had, occasionally asking "What do you think?"
When educators hear "integrate AI into your existing course," they hear "add more work to an already impossible load." Integration implies AI as an add-on, a nice-to-have supplement to "real" learning. This proposed module sidesteps that trap entirely by making AI literacy the primary curriculum, not an afterthought. And AI literacy can be taught in an unconventional framework, including AI-tutored experiences.
The advantage of this proposal lies in creating a discrete "AI, Learning, and Writing" course that bypasses the retrofit trap entirely—no more asking teachers to somehow wedge AI literacy into already-packed syllabi. It doesn't demand that the chemistry teacher suddenly become an AI expert or that the English teacher abandon Hamlet for AI exercises. While the dedicated course is a possible solution, some institutions expand AI literacy through tiered course designations such as "Know-AI," "Ethical-AI," etc., professional learning communities, and student portfolio assessment across multiple courses.
This matters because we're past the point where AI literacy can be treated as a cross-curricular responsibility that somehow gets covered by everyone and therefore by no one. Students need systematic training and experience in metacognitive monitoring and decision-making, not scattered exposure across disconnected classes. They need to understand how LM training data biases manifest in language patterns, how to stress-test AI systems to reveal their limitations, and how to develop personal protocols for verification and skepticism.
The functional grammar foundation is pedagogically essential here. Students need to be experts in using words, which means they must understand how language functions. This isn't prescriptive grammar, teaching the King's English; it isn't diagramming sentences.
Most importantly, this approach acknowledges what experienced educators know: meaningful curriculum change happens through replacement, not addition. If AI literacy matters—and it clearly does for students who will spend their lives navigating synthetic content—then it deserves dedicated instructional time, specialized teacher preparation, and serious assessment protocols.
The year-long timeline reflects this reality. Building fluid and alert metacognitive filters requires sustained practice with controlled complexity, not workshop sessions squeezed between existing units. Students need time to develop personal decision-trees for AI interaction, to practice collaborative verification, and to understand their own cognitive biases in bias detection.
This module essentially argues that critical AI literacy is foundational literacy for the 21st century. It is just as essential as traditional reading and writing, and it requires new pedagogical approaches. By abandoning the integration charade instead of imposing that heavy burden on each classroom teacher, schools would make space for teachers to develop genuine, useful AI expertise in their domain rather than trying to bolt AI awareness onto already strained curricula.
The result is a course that could actually happen, taught by teachers with real preparation time and curricular focus, serving students who desperately need these skills. Integration might be the politically safer proposal, but discrete, dedicated AI literacy education is what students actually need and what experienced teachers can realistically deliver.
This proposal needs a lot of work. It's too abstract, too packed with unfortunately necessary jargon that must be unpacked in face-to-face discussions. I've done my best to fit it into a single blog post.
Just quickly, let me say the learning outcome is purposefully limited and specific. I make the argument that what students really need are skills and strategies to distinguish fact from fiction and to recognize subtle and obvious bias when they read bot output. That's it. When students use a bot, they read its output and do what they will with it.
Here's the thing. In order to read output that could be useful to them, they must learn how to talk to the machine. So there is writing, and there is reading. Writing in this situation is human and natural; reading is artificial and unnatural.
We need not teach students to edit or revise bot output. Let them learn to edit and revise their own writing in their writing courses. To achieve this reading ability with LM output specifically, however, they must learn a great deal about how language really works and about writing: not how to diagram or categorize words and sentences, and not five-paragraph essays.
During the module, however, instruction might best be staged as a series of research projects involving writing, composition, and AI. More on this in the proposal proper. As a side benefit, students will improve as human readers and writers by becoming conversant in functional grammar.
My Modest Proposal
I. Learning Outcome: Students will demonstrate the ability to identify potentially fabricated information and systematic biases during live chat with a small RAG machine trained to provide qualitative feedback at a level acceptable to an expert teacher.
Cognitive Substrate: Building and applying a fluid, automatic metacognitive filter that represents the predictable tells the machine reveals when processing input under digital stress or confusion, including recognition of inherited training data biases and their manifestation in linguistic and conceptual patterns.
II. Knowledge Base:
Functional Grammar and Metalinguistic Skills: Functional-grammar-based coursework should include work in constituent and functional phrasal juncture and syntactic structures, in word grammar and semantic networks, in the arguments and relations of verbs, in conjunctive clause structures and rhetorical relationships, in disciplinary genres with particular attention to thought verbs, and in social and academic registers.
Systematic Bias Recognition: Beginning with exploration in human face-to-face discussions, students must confront and examine the controversial topic of bias in all its aspects. Collective modular experiences in reading, analyzing, and discussing human-written texts for bias should precede formal instruction in data inheritance patterns, including demographic underrepresentation, temporal bias (recency vs. historical accuracy), source bias (overrepresentation of certain publications, websites, or academic disciplines), and geographic/cultural skewing toward English-speaking, Western perspectives.
Conceptual Blind Spots and Failure Modes: Recognition of common AI knowledge gaps including: lived experience limitations (AI's inability to authentically represent embodied human experiences, especially those of marginalized communities); temporal knowledge cutoffs and their consequences; hallucinated citations and false confidence in non-existent sources; oversimplification of complex cultural, political, or social phenomena; and pattern-matching errors that produce plausible but factually incorrect information.
Bias Manifestation in Language Patterns: Identification of linguistic markers of inherited bias including: default assumptions about race, gender, class, and nationality that appear in "neutral" responses; stereotypical descriptors that reveal training data patterns; confidence modulation that varies systematically based on topic demographics (e.g., more hedging when discussing contributions of underrepresented groups); and the patterns that favor certain argumentative styles or cultural frameworks.
Active Detection and Verification Techniques: Systematic prompt engineering to reveal biases and fabrications through comparative questioning (asking the same question about different demographic groups), stress-testing edge cases, requesting specific examples and noting patterns in what the system can and cannot provide, and developing personal bias detection protocols through iterative questioning strategies.
III. Applied Learning Component: Mentored experience for one academic year in craft-based engagement within structured, intentional research projects designed for self-regulation, collaborative learning, and community building. Students keep AI logs with periodic reflection to guard against dependency and to sustain rich development of ways to read both human and LLM text; these logs feed collaborative sharing of verification practices that acknowledge multiple perspectives in evaluating AI output.
Meta-Awareness of Detection Limitations: Understanding that bias detection is itself subject to the student's own cognitive biases and cultural positioning; developing humility about the limits of bias recognition; and building ethical frameworks for responsible AI interaction that balance skepticism with productive use.
IV. Negotiated Project Work Component
Research Project Requirements: Following AP course design principles, students complete a minimum of four substantive research projects throughout the academic year, with at least two presented as individual presentations and two as collaborative panel discussions. All presentations are videotaped and maintained in a digital portfolio for assessment, peer review, and potential publication in school or district showcases.
A. Core Research Topics in the New Ecology:
Students select four from the following areas of inquiry, each designed to explore the implications of machine language models in contemporary life:
Mental Health and AI Interaction: Examining the psychological effects of prolonged AI conversation, dependency risks, and the therapeutic potential and limitations of AI companionship
Privacy and Data Sovereignty: Investigating personal data usage in training models, consent mechanisms, and strategies for maintaining digital privacy in an AI-integrated world
Anthropomorphizing Machine Intelligence: Analyzing the human tendency to attribute consciousness and emotion to AI systems and its implications for decision-making and relationship formation
Developmental Differences in AI Use: Exploring how different age groups, cognitive styles, and learning differences interact with AI systems, including accessibility considerations
AI and Knowledge Work: Examining the changing landscape of professional expertise, skill valuation, and career preparation in AI-augmented fields
Cultural and Linguistic Bias Amplification: Investigating how AI systems perpetuate or challenge existing power structures across different communities
Academic Integrity in the AI Era: Developing frameworks for ethical AI use in educational settings and defining authentic intellectual work
Student-Initiated Topics: Students may propose alternative research focuses with instructor approval, provided they demonstrate clear connections to critical AI literacy and include substantial opportunities for bias detection and verification practice.
B. Research Process Protocol:
Phase 1 - Human-First Foundation: Students begin each project by documenting their existing knowledge, experiences, and assumptions about their chosen topic through reflective writing and peer discussion/interviews. They generate initial research questions based on personal curiosity and identified knowledge gaps.
Phase 2 - Traditional Source Investigation: Students locate and analyze human-authored sources regarding their topic and their specific research questions including academic research, journalistic investigations, policy documents, and expert interviews. This phase emphasizes primary source evaluation and traditional information literacy skills.
Phase 3 - AI-Assisted Exploration: Students engage with AI systems to explore their topic, applying learned bias detection techniques and verification protocols. They document AI responses, identify fabrications or biases, and compare AI-generated insights with human-authored sources.
Phase 4 - Multimodal Composition: Students create presentations incorporating traditional research findings, AI interaction documentation, and critical analysis of both human and machine-generated content. Formats may include traditional research papers, multimedia presentations, documentary videos, or interactive digital portfolios.
Phase 5 - Reflective Portfolio Documentation: Students maintain detailed process documentation including research logs, bias detection notes, verification strategies employed, and metacognitive reflections on their evolving understanding of both their topic and their own AI literacy development. Near the conclusion of the year, students prepare portfolio exhibitions and invite other classes, teachers, and stakeholders to attend.
C. Assessment Integration: Projects serve dual purposes as content exploration and skill demonstration, with students required to exhibit proficiency in bias detection, source verification, and critical evaluation of synthetic content within their chosen research domains.
Conclusion
This proposal represents a starting point for curricular discussions, not a blueprint. It's deliberately specific in its learning outcomes and concrete in its methods because abstract discussions about AI in education tend to generate more heat than light. When these conversations become overly philosophical or ideologically charged, practical implementation gets derailed.
The framework offered here acknowledges both the urgency of preparing students for a world saturated with synthetic content and the reality of what schools can reasonably accomplish. It's a pragmatic attempt to move beyond the integration fallacy toward dedicated, systematic instruction in critical AI literacy.
What's missing? Where are the blind spots? Why is it worth considering? If unworthy, what alternatives are there?
How might this framework adapt to different institutional contexts or student populations? What approaches might work where this one doesn’t?
Catalyzing the collective wisdom of educators who understand both the possibilities and constraints of AI in real classrooms—that's the point. Your thoughts, critiques, and improvements are not just welcome—they're essential for developing something that might actually work. If you choose not to comment here, by all means, I urge you to talk about the need for action like this with others in your sphere of influence.