The Artificial Intelligence Test: To Do or Not to Do
The tenor of the stuff I’m reading suggests we’re heading for another year of AI paralysis and mayhem. I just read a Substack piece from a highly skilled writer with tens of thousands of subscribers who paints LLMs as pure evil—I mean, Satanic, fire and brimstone, horns, a pitchfork, the whole nine yards—and rechristens the bot “Hal.” It would have been more effective branding back in 2023, but oh, well. The post has, like, 25,000 hearts and 150 restacks.
It’s unnerving to witness this level of raw, unthinking emotion run amok among highly eloquent people. One thing is for sure: this homer post wasn’t written by a bot. Either that, or this writer can outbot me any day. I sometimes don’t scrub my posts with a Brillo pad. I guess I figure I have as much right to words as a machine. Just because a machine uses some words doesn’t mean they’re removed from my lexicon.
Did you read Matt Brady’s piece on cheating? Jonathan Swift, roll over. Matt’s piece put me in mind of A Modest Proposal. OK, he writes: if the powers that be want to deny all school kids access to instruction in using the linguistic calculator, if we prosecute them as cheaters for using a machine to do things adults don’t like, well, Matt is going to teach his kids to cheat like high-class gamblers in Las Vegas. Why should his kids have to compete with a generation of peers who are saying, screw it, I’m not doing your busy work anymore? Aristotle told us: anything a human can do naturally, they can do better with instruction.
As the 2025-26 school year approaches, most schools face the same choice they avoided last year: a) ban AI tools entirely and pretend they don’t exist, or b) figure out how to begin teaching learners to use them, because no doubt they have some pretty impressive uses. Public intellectuals whom one might expect to have wisdom on the matter add support to choice A, the ban. “I don’t think it [ChatGPT] has anything to do with education, except undermining it. ChatGPT is basically high-tech plagiarism…and a way of avoiding learning,” said Noam Chomsky (EduKitchen, January 21, 2023).
I’m hearing through the grapevine that school administrators are struggling mightily to draft policies that satisfy their teachers, their parents, their business communities, and their students. Historically, school administrators are loath to touch third rails like parents. I predict no clear policy unless the policy is to forbid any use of AI at all in academic work. The only benefit of that policy I can see is clarity.
How many teachers are going to take Matt’s suggestion seriously? Another year of bouncing the ball at the half-court line could be enough of a push to do it. Rigid bans and tribal panic about AI in schools risk deepening the very inequities education is supposed to fight. Thoughtful integration, rooted in cultural relevance, must be our aim.
The Stakes: What Happens When Schools Postpone the Inevitable
Imagine a future where schools take a restrictive stance on AI in 2025-26. The anxiety has concrete costs. Classrooms would operate under comprehensive AI bans, with districts like Fairfax County restricting access to ChatGPT and other AI tools, citing concerns about "academic integrity and child safety." Some teachers would enforce strict penalties, with educators like high school English teacher Chanea Bond declaring that students who use AI will "receive a zero on assignments" with "no exceptions."
Traditional assessment methods would dominate. Teachers would require handwritten assignments to ensure students develop skills like critical thinking and writing without AI assistance, despite the fact that handwriting has no more logical link to thinking than finger spelling does. For good measure, it could happen that Google’s Disneyland of digital doodads falls apart for lack of visitors, which could be a good thing.
Many educators would share philosophy professor James Taylor's concern that with AI, "the student has now outsourced all of their thinking to a machine" and "haven't worked out and developed the skills themselves." Just what all the kids I’ve been talking to lately say they want to do: outsource their thinking to a machine. They have no interest in developing their brains, right? They aren’t curious about anything much. Don’t worry about their having legitimate needs and interests re: AI. Keep the cookie jar on top of the refrigerator.
School boards would be divided over AI's place in education, with some members expressing "fundamental issues" with AI's tendency to generate false information, worried about "alternative facts entering into the education of our children." Oh? Have they turned on the news lately? Teacher adoption would remain low, with surveys showing "two of every three educators" not using AI tools in their classrooms, citing lack of time, knowledge, and clear policies. The resulting environment would create a divide between schools embracing technology and those attempting to maintain traditional educational approaches. Who cares if the divide is along class and racial lines?
A scenario could emerge where "wealthier schools hold an advantage over underfunded ones solely because they can afford to invest in the best AI technologies," potentially widening academic achievement gaps. This digital divide is already apparent. In New York City alone, "over 1 in 3 black and Hispanic households and 45% of low-income households lack access to high-speed internet.”
Workforce Readiness Gap
By 2027, there will be a significant workforce disruption due to AI, with "69 million new jobs created and 83 million displaced," and AI skills becoming requirements for employment. What happened to college and career ready? Students from schools that restricted AI will "risk falling further behind in the job market," lacking the skills to leverage AI for entrepreneurial opportunities or career advancement. This scenario will create a generation of workers entering an AI-integrated workplace like deer in headlights.
The "Third Digital Divide"
A new type of digital divide will emerge beyond just access to technology: "the rich have access to both technology and people to help them use it while the poor have access to technology only" if they even have that. This is where Matt Brady’s genius piece comes in. Rather than AI augmenting learning, in restrictive environments it could become a replacement where "instead of AI helping to learn, supporting student's productive struggle as they do the hard work of gaining and applying knowledge, it'll become a crutch." That is, unless their teachers teach them how to cheat like most of their peers seem to be doing if we believe the fake news.
Socioeconomic Impacts
Research indicates that "digital exclusion has been shown to be associated with a lower quality of life, lower educational outcomes, and even reduced physical and mental health."
“The research findings suggest that digital inclusion should be viewed as a wider determinant of health. Many of the identified consequences of exclusion are particularly important in relation to mental health and mental health recovery. This research suggests that identifying and addressing digital exclusion should be viewed as a priority for mental health services.”
The economic impact would be enormous: "The racial wealth gap will cost the US economy $1-1.5 trillion between 2019 and 2028," and AI could either mitigate or worsen these disparities, depending on access.
This bifurcated approach to AI in education risks creating a society where technological fluency correlates even more strongly with socioeconomic status, reinforcing existing inequalities and creating new barriers to social mobility. Students from AI-restricted environments may develop resentment toward these policies as they encounter peers with advanced AI skills, potentially leading to a backlash against the educational institutions that limited their technological development.
The Voices Driving the Exclusionary Side of the Debate
The debate over AI in schools has been largely dominated by adults. A student whom I interviewed a few months ago told me how silly he found it that he knew far more about the dangers of AI than his teachers. The voices of students—who have the most to gain and lose in this technological shift—have been absent. When we do hear their voices, it’s treated like a novelty act, a glimpse behind the curtain where teachers work every day.
Teachers practicing in the Humanities offer critiques of educational technology framed in apocalyptic terms—Satan and his minions flapping their wings, showing vampire teeth. They are so beaten down by having to defend their existence that the reaction was swift and apparently permanent. Years of standardization, testing pressure, and technological disruption have left many in the Humanities feeling that AI is just another assault on axiological sensibilities. When AI is framed as an existential threat to human creativity and imagination, the only acceptable response becomes total resistance. This makes it nearly impossible to have conversations with the people who may be our collective conscience about arguably one of the most powerful educational disruptions in history.
The Silent Majority: Teachers Stuck in the Middle
Most writing teachers aren’t ideological warriors. They want their kids to learn to write. They are thrilled when they see student progress. They are pragmatists trying to help students succeed. But they’re caught between administrative pressure to “address AI” when they know their administrators would rather see it go away, student needs that traditional approaches are not meeting and have not met for decades, and professional uncertainty about what good practice looks like.
What Thoughtful Integration Might Look Like
For Teachers: Let’s be honest. Thoughtful integration is going to take a while. Current professional development often replicates ‘sage on the stage’ delivery, where information is transmitted rather than actively constructed. Sessions for teachers aren’t engaging or relevant; they are usually one-day, one-shot, take-what-you-can affairs. This method, like outdated classroom models, is frequently and rightly seen by teachers as irrelevant or ineffective, failing to enhance classroom practice and wasting their precious time. They need to understand how they can use the linguistic calculator—how meaning emerges not through friendly conversations with models but through intentional, well-thought-out applications of a tool during learning activity.
For Students: Students have the most to gain and to lose. We leave them sitting in their corner, waiting for us to show them something they can do with the bot besides cheat. Schools need an AI-embedded curriculum that teaches students to plan and refine their AI interactions thoughtfully. This means developing the logical reasoning and metacognitive awareness needed for curating bot output, documenting their processes, and critically evaluating AI-generated content for accuracy, bias, and appropriateness.
Administrative Role
Uniform Academic Integrity: Schools need a single, comprehensive academic integrity policy that addresses plagiarism broadly, not separate rules for AI versus traditional forms of academic dishonesty. Dishonesty is dishonesty. A policy naming AI as the source of plagiarism ignores reality; AI simply makes dishonesty easier to accomplish. This policy should thoughtfully define what constitutes original work, not fraudulent work. If we know that original work is defined by its inherent qualities, not by its tone or its style, we better understand what fraudulent work is, and we can find circumstances wherein AI is appropriately used. Conceding officially that all uses of AI are dishonest, or that each teacher is free to decide, is not the beginning of a rational dialogue.
Teacher-Led Implementation: Within this framework, disciplinary and grade-level teacher teams should develop specific AI guidelines appropriate to their contexts and disciplines. Teachers really are amazing human beings, often forbidden from doing good work by the people who control them. Responsive teaching? Pshaw.
Teachers spend their evenings thinking about how to make this or that assignment come alive. If a writing task asks students to deliberately cheat to produce a position paper on AI for a class discussion rather than for publication, for example, teachers and students alike find out a bit about the reality of using AI assistance, no holds barred, as long as students are truthful about what they did and what they learned. Risky business, eh, asking students to cheat? Have any of you tried to create a fraudulent paper with AI that could pass as legit? It’s tough. So remove the guard rails. Let students see what cheating is right then and there in the classroom.
Infrastructure Support: Administration’s primary responsibility is providing the forum for community discussion and resources for teacher groups to collaborate on AI curriculum development, ensuring they have time, access to knowledgeable experts, and the authority and resources to create thoughtful, discipline-specific approaches. On some level, this has always been its primary responsibility. In the real world, administration calls the shots.
Moving Forward: Communication Over Camps
There is no perfect solution to AI in education, but there are thoughtful approaches that serve students better than paralysis or prohibition—or scare tactics like casting AI as Satan on Earth. The choice isn't between human and artificial intelligence. It's between thoughtful integration and chaotic avoidance. When viewed through Gloria Ladson-Billings' (1995) culturally relevant pedagogy framework, the path forward becomes clearer. In condensed form, the following assertions encompass the spirit of her framework:
Academic success demands rigorous expectations where all students can and will achieve. Expecting them to cheat dooms the enterprise from the start. Begin with the expectation that they CAN and WILL learn in this classroom. They WANT to learn.
Cultural Competence honors students' identities while building bridges to wider worlds. Honoring students’ identities is a big step. It has to be a conscious mission, not a slogan. Give students many opportunities to see cultural competence in action on campus by making it a part of the grading system, if you must have grades. AI can help here if thoughtfully used. Instead of AI policies written and pinned to the wall in every classroom, put up a cultural competence policy.
Critical Consciousness empowers learners to question, challenge, and transform societal inequities. Aim not so much for critical thinking, the watered-down version of critical consciousness. Bots can simulate critical thinking. But no way, no how, not in this reality can a bot demonstrate critical consciousness. Again, if you must grade, grade this.
To achieve the first objective, as I have already said but cannot emphasize enough, administrators must develop unified academic integrity policies that address plagiarism holistically rather than creating separate rules for AI versus traditional forms of academic dishonesty. These policies should thoughtfully define what constitutes original work based on its inherent qualities, creating safe zones for experimental AI use in the rhetorical classroom. Educators acknowledge that it is often more useful to know what a thing is than what it is not.
Just as Ladson-Billings emphasized high academic expectations for all students, these policies must maintain rigorous standards without watering down instruction. Within this framework, disciplinary and grade-level teacher teams should develop specific AI guidelines appropriate to their contexts, recognizing that disciplinary cultural competence requires understanding how different subject areas interact with technological tools.
Teachers require professional development that goes beyond simple detection training. They deserve more than being yapped at by a talking head with a slide show. This professional learning should help educators understand through activity how language works in AI interactions, how limited AI’s rhetorical patterns are, and how AI's legitimate strengths can help develop students' sociopolitical consciousness so they can participate fully in a capitalist society. Teachers should experiment with AI tools collaboratively to get clear about their capabilities and limitations, and collaborate with students as well to develop integration strategies that emphasize human judgment, creativity, and critical consciousness—the very skills that culturally relevant, responsive pedagogy values.
Parents play a crucial role in this ecosystem by asking specific questions about their school's AI implementation approach. Wide-spectrum bans are not in their children’s best interests for reasons already discussed. Their demands for teacher development and technology access initiatives can help ensure that all students, regardless of background, have equitable opportunities.
Parents can help their children think about AI-generated content, reinforcing the sociopolitical consciousness that Ladson-Billings identified as essential to meaningful education. Through constructive engagement in policy discussions, parents contribute to creating learning environments that honor students' cultural knowledge while preparing them for complex futures.
Schools that embrace this complexity, invest time and money in comprehensive training, and focus on building students' cognitive and communication skills will thrive. Those that remain paralyzed by ideological debates or administrative inertia will find themselves increasingly irrelevant to the students they claim to serve. The question isn't whether AI will transform education. Anyone paying attention understands that a beach may be a warm spot to stick your head in the sand, but it’s just temporary comfort. Plus it’s filled with creepy crawlers. The question is whether schools will lead this transformation through a culturally relevant lens that ensures all students develop academic excellence, cultural competence, and critical consciousness in relation to these powerful tools.
This fall presents a rare opportunity for schools to move beyond false binaries and intellectual control to focus on what students actually need: the ability to think for themselves, uncoerced, to communicate effectively using their own words, and to use all available tools—human and technological—to understand and transform their worlds. Just as Ladson-Billings offered a thought tool to transform educational thinking in 1995 by asking big questions about student success, schools can transform themselves by asking not how to restrict a technology that has exposed schooling’s core weaknesses, but how to harness it for more equitable, engaging, and empowering education.
Let’s not mistake policies for wisdom or bans for courage. Instead, let’s build something better for the humans we serve, with all their uneven, earnest, and stubbornly creative voices—machine-made or otherwise. If all the world’s a stage and all the men and women merely players, let’s not keep our learners restricted to the audience.

