It gets old and stale, the constant drumbeat, the self-righteousness of the guardians of the ivory tower. AI cannot write! they bellow. It merely predicts the next word! We must ban it from the academy!
*****
“There exists one pedagogical truth…that has been recognized for decades—maybe even centuries—and is not going to change,” wrote Dr. Sutton on May 1, 2024. “Argumentative writing promotes deep critical thinking as well or better than any other educational activity.”
Dr. Sutton’s blog post explains why Warner University stands against AI. I’m not picking on Warner University. It just happens to represent what I’m seeing on a broader scale.
At first glance, this assertion about argumentative writing looks reasonable enough. On its own, it makes a cause-effect claim that is difficult to refute. But notice what the superlative does: it closes the door on every other option. Argumentative writing isn't merely good; it's better than anything else.
Before wrapping up, Dr. Sutton ties himself in logical knots, undoing by his own example the very argument he was making. His university is taking a stand, refusing to succumb to the evil bot and the human fools trying to make peace with it.
“Warner University is committed to time-tested educational pedagogy, however, and we will not undercut it. We boast that integrity is one of our core values. If we allow AI to thwart the development of critical-thinking skills among our student body, then we are hypocrites and cheaters against our constituents. We will do the work.”
You would be hard-pressed to find a better example of how not to make an argument. We have a thesis: AI should be banned. The first premise, the assertion that writing arguments is the best way to teach critical thinking, goes unsupported. I've seen studies reporting low correlations between independent measures of critical thinking and scores on essays. Writing and critical thinking are arguably separate but integrated processes.
The second premise is unstated. The closest we get is another unsupported claim: AI thwarts critical thinking. Future research is likely to reveal layers of complexity, implicating AI in both positive and negative effects depending on conditions.
There is a false equivalence as well: the argument equates “using AI” directly with “thwarting critical thinking” without establishing any logical connection between these concepts or providing evidence for the relationship.
The empty appeal to tradition shows up in many of the lyrical, elegiac manifestos against AI: “time-tested educational pedagogy” is invoked without specifying which pedagogies are meant or explaining why they work. In some ways this appeal irks me most.
The moral fallacy is paternalistic and offensive. It turns a pedagogical question into a moral one by invoking “integrity” and “hypocrites and cheaters” without justifying this ethical reframing.
Binary thinking permeates the argument. It creates a false dichotomy between “doing the work” and using AI, ignoring nuanced approaches in which AI could support rather than replace critical thinking.
Missing warrants linking AI use to lost learning, immoral behavior, and hypocrisy render the argument at least as much slop as the “AI” slop the resistance complains about. The text makes bold claims but never explains how AI use would thwart the development of critical thinking.
Ironically, the statement violates the very principles of argumentative writing it is presumably trying to defend. A stronger version would define specific critical-thinking skills, present evidence about how those skills develop, explain precisely how AI might help or hinder that development, consider counterarguments about AI's potential benefits in education, and support its claims with research or concrete examples. Instead, we get emotional appeals and unsupported assertions: exactly what we teach students to avoid in argumentative writing.
*****
The resistance to AI in the university reveals more about the resisters than about the technology they fear. When educators abandon the very principles of argumentation they claim to cherish (evidence, logic, and balanced analysis), they undermine their own position.
Perhaps a more productive path forward lies not in blanket condemnations or moral panics, but in thoughtful exploration of how AI might enhance critical thinking in certain circumstances.
The question at this point isn't whether to ban AI; that regressive position is the academic equivalent of “build that wall.” The real questions are these: What do we do with AI in service of deeper learning? How do we teach students its risks? That conversation requires precisely the kind of nuanced, evidence-based reasoning that seems absent from current academic proclamations.
*****
Academia has traditionally been a gatekeeper of knowledge and assessment. AI disrupts this power dynamic by giving students unprecedented access to writing assistance, problem solving, critical analysis, and information synthesis outside traditional academic channels.
As centers of research and innovation, universities have lately proven themselves creative and student-centered in designing new courses and programs.
In many ways, however, it's far easier to change curriculum than to change instruction and assessment, the two functions AI most directly threatens. An appeal to “time-tested” methods reveals a deep institutional resistance to the transformative changes AI seems to require.
What makes AI fundamentally different from previous technological disruptions is that it doesn't just automate tasks. It emulates core intellectual processes that have long been considered uniquely human.
While calculators could solve equations and computers could store information, they weren't writing essays, generating ideas, or engaging in analytical discourse. AI's ability to participate in these quintessentially human domains of thought and expression represents an unprecedented challenge to traditional notions of education, expertise, and intellectual authority.
A potential intellectual collaborator, available any time on a screen, forces us to reconsider what it means to think, read, write, and learn. The hard work of understanding the implications of that fact simply can't happen inside a war zone. We are more intelligent than the bot, but we're not behaving like it.
Warner University: https://warner.edu/why-warner-university-stands-against-student-use-of-ai/