Today I’m reporting on a semi-quasi-scientific study which has not been peer reviewed. My advice is to view it as speculation grounded in a small sample of real life evidence, an autoethnography of sorts. Two conversations from several weeks ago serve as the data for this analysis, neither of which began nor ended as part of a planned study. One interlocutor, Mr. Mason, a high school teacher, represents a portion of active educators who believe that the institution has an obligation to provide instruction in appropriate uses of AI to students. I didn’t record Mr. Mason’s talk, but instead took notes when I noticed the rich material I was hearing. The second, Mr. Dixon, a university professor, I did record with his permission. He represents those who believe the institution is obligated to enforce a ban of AI for academic work. I was interested in locating the genesis of these opposing passionate commitments. My method consisted of the following:
Having free-flowing conversations with two teachers, Mr. Mason and Mr. Dixon (pseudonyms);
Using Voice Memos to transcribe Mr. Dixon’s conversation; relying on contemporaneous notes from Mr. Mason’s conversation;
Cleaning the data by listening to the audio and manually editing Dixon’s printed transcript to conform to it;
Using the method of open coding involving in vivo codes;
Writing a prompt for Claude to analyze the transcripts and notes to validate my in vivo codes (a minimal sketch of this step follows the list);
Identifying the character of the passions between camps on the question of LLM instructional use.
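For readers curious about that validation step, here is a minimal sketch of what it might look like using Anthropic’s Python SDK. The model name, transcript path, and candidate codes below are illustrative assumptions, not the actual prompt or codes from this study.

```python
# Minimal sketch of the code-validation step, using Anthropic's Python SDK.
# Assumptions: the `anthropic` package is installed, ANTHROPIC_API_KEY is set,
# and the transcript path, model name, and candidate codes are placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("dixon_transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

# Illustrative candidate codes; the study's actual in vivo codes came from
# open coding of the transcript and notes.
codes = [
    "terrified of getting caught",
    "good for her mind",
    "fire with fire",
]

prompt = (
    "You are assisting with qualitative analysis. Below is an interview "
    "transcript followed by candidate in vivo codes drawn from it.\n\n"
    f"TRANSCRIPT:\n{transcript}\n\n"
    "CANDIDATE CODES:\n"
    + "\n".join(f"- {c}" for c in codes)
    + "\n\nFor each code, state whether it is grounded in the transcript, "
    "quote the supporting passage, and note any salient codes I missed."
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```

The point of this pass was triangulation: a second look at the same data to check whether my codes were grounded in it, not to outsource the coding itself.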
The Critical Role of Empathy in Education
Mr. Mason and Mr. Dixon spoke with me on the topic of instruction and AI in early March. I selected these names to echo the professional collaboration of Charles Mason, an astronomer, and Jeremiah Dixon, a surveyor, whom fate delegated to mark the line between what became the free and the slave states in America.
The relationship between the historical Mason and Dixon was marked by incessant argument. Like innovators in every era, Mason embraced new technology and trusted his telescope observations. Like defenders in every era, Dixon insisted they needed the transit, a precision measuring instrument, and a surveying chain.
My Mr. Mason embraces AI technologies and trusts students to navigate them with guidance, believing direct engagement with these new tools is essential for learning. Mr. Dixon insists on traditional educational approaches and tools like unaided writing, believing these foundational methods are irreplaceable if genuine learning is to endure. He sees the new tools as a threat.
Both scenarios reflect a pattern. When we are faced with technological change, some embrace new approaches that offer different perspectives. Others maintain that traditional tools and methods remain essential for "proper" work. This tension resonates across centuries, whether drawing boundaries across states or navigating the boundaries between human and artificial intelligence in education.
In my mind, the analogy captures the essence of our situation perfectly. How today’s teachers navigate this intersection will shape their students’ learning experiences and readiness, now and potentially going forward. At the center of the debate, I discern in the data, lies deep passion and commitment on both sides of the issue.
Whether Mr. Mason or Mr. Dixon uses LLMs pedagogically or not will matter far less than their willingness to talk with students about their AI experiences from an empathic stance. The willingness of teachers to listen, to understand, to acknowledge, and to respond to their students' challenging experiences with AI may be the single most important factor during this truly exhausting period. Silence in this instance is not golden.
Empathy, Mason and Dixon Style
The conversations with Mr. Mason and Mr. Dixon reveal a striking contrast in how they approach AI. Interestingly, the difference isn’t primarily about technical knowledge, nor about teaching philosophy; each teacher is highly expert in his subject area and believes students need to retain information from classroom work. The difference lies, specifically, in the ability to notice and respond to students’ emotional experiences with AI.
Mr. Mason: Empathy as Educational Foundation
Mr. Mason exemplifies what might be called empathetic technological integration. His approach begins by recognizing the emotional reality students face: “Our school is one of these $50,000 a year plus places. The kids are all on track to go somewhere, you know, the golden ticket. They’re terrified of getting caught for academic integrity purposes.”
Mason has some serious technology chops. As a historian, he is deeply interested in teaching his students to read across texts in depth, and he has engineered AI spaces where students can query PDF files for cross-textual information (a sketch of what such a space might look like follows).
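Mr. Mason didn’t share his implementation, so the following is only a guess at the shape of such a space: a minimal sketch assuming Anthropic’s Python SDK and a Claude model with PDF support. The file names, question, and model string are placeholders.

```python
# Minimal sketch of a cross-textual "AI space": two primary sources travel
# with the student's question as PDF attachments. Assumptions: Anthropic's
# Python SDK, a Claude model with PDF support, and placeholder file names.
import base64
import anthropic

client = anthropic.Anthropic()

def pdf_block(path: str) -> dict:
    """Package a local PDF as a base64 document content block."""
    with open(path, "rb") as f:
        data = base64.standard_b64encode(f.read()).decode("ascii")
    return {
        "type": "document",
        "source": {"type": "base64", "media_type": "application/pdf", "data": data},
    }

question = "How do these two accounts of the same event differ, and where do they agree?"

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            pdf_block("primary_source_1.pdf"),
            pdf_block("primary_source_2.pdf"),
            {"type": "text", "text": question},
        ],
    }],
)
print(message.content[0].text)
```

What makes this “cross-textual” is that both documents travel with the question, so answers can draw on, and be checked against, more than one source.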
His empathy extends beyond technological education to active listening and providing emotional support. He shared the story of a student who experienced profound distress after using AI, which I tried my best to capture verbatim. This version isn’t exact, but it is substantively accurate, and it squares with other such incidents I’ve heard or read about:
"She pasted it in, and it filled up the essay and finished it, and she just about cried because it was so good. It was far better than anything that she could do, and it just, I mean, she was—it was like an emotional catastrophe for her because it belittled her writing... And fortunately, she had a teacher she could talk to about it. But these kids that don't... they're being tormented, essentially."
Mr. Mason’s empathy leads him to create safe spaces where students can discuss their AI experiences without fear of judgment or punishment. This approach recognizes that students are navigating complex psychological territory: many feel drawn to AI’s capabilities while drowning in doubt, guilt, inadequacy, or uncertainty about its appropriate use.
Mr. Dixon: Prioritizing Traditional Educational Values
In contrast, Mr. Dixon's perspective focuses more on preserving educational traditions and learning processes he believes are anchored in centuries of scholasticism. As a scholar of classical antiquity who deals with texts in the original Latin and Greek, his very being is devoted to his discipline. He speaks and reads the languages of Plato and of Nietzsche.
The thought that a student would defy a course requirement and upload a paper to be summarized by AI appalls him:
"As a professor, I want her to spend the eight hours to figure it out on her own. I don't want her to have a machine do it for her, because I think it's good for her mind."
Mr. Dixon has always devoted himself to his students and is highly appreciated as a professor for his probing discussions and whimsical, reflective demeanor.
His core concern connects to his perception of the threat AI poses to humanity. Describing an encounter of the AI kind, he relates how lucky he feels to have learned math before the introduction of calculators (his undergraduate degree is in mathematics). After the narrative climax of his tale of a student “caught red-handed,” after the unmasking, Mr. Dixon releases his frustration:
"He just didn't seem to understand that, no, it's not okay that you had a machine write it for you. You know, how do you keep teaching writing the way I learned, say, and you learned your multiplication tables before there were calculators?"
Having worked for years in a tight-knit, small academic community with annual conferences, and having co-authored papers, Mr. Dixon has mentored countless undergraduates and enriched the inner lives of adults through his writing, his speaking, and his collaboration. He wants to see a path to good uses of AI, but his policy has been and remains “AI not welcome here.”
Consequences of Emotional Intelligence
The empathy, or lack of it, in the surrounding culture creates very different educational environments for students:
In empathy-rich environments: Students feel safe to explore AI’s possibilities, discuss their uncertainties, and develop understanding of appropriate use. They develop agency rather than anxiety. From interviews I’ve done with students, those who have had ‘mentored-experience-by-happenstance’ have developed a healthier appreciation for long-term brain care and for the ink pen.
In empathy-poor environments: Students hide their AI use, experience guilt when using tools that are everywhere in their environments, and miss opportunities for guided learning about appropriate AI integration.
Bridging the Empathy Gap: Practical Approaches
To address this need, educators might consider:
Active listening: Creating regular opportunities for students to share their AI experiences without fear of judgment.
Acknowledging complexity: Recognizing that students face genuine dilemmas about AI use rather than simple "cheating" decisions.
Guided exploration: Designing structured opportunities for students to experiment with AI under guidance rather than prohibition.
Emotional validation: Acknowledging students' feelings of inadequacy, confusion, or uncertainty when confronting AI's capabilities.
The immediate need, in my view, is not technical understanding of AI tools, but emotional intelligence about how students experience these tools. By cultivating empathy alongside technical knowledge, teachers can guide students toward a healthier, more productive relationship with AI technologies.
Academic Integrity Under Pressure
Traditional concepts of academic integrity, centered on individual, unaided production of work, are being challenged by AI tools. Mr. Mason describes students "terrified of getting caught" and teachers "looking at the keystrokes" to police AI use, illustrating how existing academic integrity frameworks need a better understanding of what constitutes legitimate AI assistance versus inappropriate delegation. There is growing tension in how educators respond to AI in student work. While some teachers are "fed-up" and fight "fire with fire" by using AI to grade students' AI-generated work, others embrace it to customize learning.
Mason’s "Command Stance" vs. Passive Use: Pedagogical Approaches to AI Literacy
The concept of teaching students to take a "command stance" toward AI rather than passively following its lead emerged as a significant theme in Mr. Mason's observations. This distinction between commanding AI as a tool versus being led by it represents a crucial pedagogical approach.
Students who develop this command stance stand a better chance of maintaining agency and critical thinking in their AI interactions. They understand when and how to use AI appropriately, maintaining control over the final product rather than uncritically accepting AI outputs.
Developing pedagogical approaches that cultivate this command stance—helping students understand when to use AI, how to prompt effectively, and how to critically evaluate AI outputs—represents a crucial educational frontier.
Professional Development Challenges for Faculty in the AI Era
Both conversations highlight significant challenges in faculty professional development regarding AI. As Mr. Mason noted, he's working with "a very conservative experienced faculty who doesn't really feel like they need to change." In some ways, Mason is experiencing the adult version of his students’ experience.
Faculty face both technical challenges (learning how AI works and how to use it effectively) and deeper conceptual challenges (reconsidering fundamental assumptions about teaching, learning, and assessment). Traditional professional development approaches focusing solely on technical training are likely insufficient to address these deeper concerns.
Effective faculty development must address not just technical skills but provide spaces for teachers to work through their emotional and philosophical responses to AI—moving from fear or dismissal toward thoughtful integration.
The Need for Structured Dialogue About AI in Education
Rather than rushing to implement AI policies or focusing exclusively on technical training, educational institutions would benefit from creating structured, facilitated settings where teachers can openly discuss their emotional responses to AI. These dialogues should prioritize understanding one another’s perspectives before moving toward action.
Unfortunately, administration often appears absent from these crucial conversations. As both transcripts suggest, the burden falls largely on individual teachers like Mr. Mason to create spaces for dialogue, often without institutional support and sometimes with the institution working against them. This administrative absence leaves teachers without the structured backing they need to develop empathetic, effective approaches to AI.
The path forward requires recognizing that AI in education is not primarily a technical challenge; it is an emotional and relational one. By prioritizing empathy and creating spaces for educators to process their feelings about AI, institutions can develop approaches that serve students rather than ignoring or suppressing their lived experiences.
Only through these empathetic dialogues can educational institutions develop AI policies and practices that balance traditional educational values with the realities of the technological world students are entering. The future of AI in education depends not primarily on technological sophistication, but on our capacity for empathy and understanding across generational and philosophical divides.
Reading List
Banissy, M., & Edgar, J. (2019). Jobs of the future: Teaching empathy to artificial intelligence. Microsoft Europe News. https://news.microsoft.com/europe/features/more-than-a-feeling-teaching-empathy-to-artificial-intelligence/
Education 2.0 Conference. (2023). How can empathy enhance teaching and learning? Education 2.0 Conference Blog. https://www.education2conf.com/blog/how-can-empathy-enhance-teaching-and-learning
Reniers, R. L., Corcoran, R., Drake, R., Shryane, N. M., & Völlm, B. A. (2011). The QCAE: A questionnaire of cognitive and affective empathy. Journal of Personality Assessment, 93(1), 84-95.
Xu, Z., Cheng, X., Wang, F., & Lin, J. (2023). Associations of empathy with teacher–student interactions: A potential ternary model. International Journal of Environmental Research and Public Health, 20(10). https://pmc.ncbi.nlm.nih.gov/articles/PMC10216484/
Zhu, L. (2024). Teachers using AI to grade their students' work sends a clear message: They don't matter, and will soon be obsolete. Futurism. https://futurism.com/teachers-ai-grade-students
We also need school leaders who practice empathy toward teachers who want to experiment with AI in their classrooms. There was a time when I was so excited to experiment with using AI to provide feedback on student writing assignments, only to be punished and looked upon poorly by leadership and even parents. I built a custom system where I could take photos of students’ handwritten work, quickly scan it, and have AI process it automatically (a sketch of that pipeline appears below). I thought this was an awesome idea because I was combining the best of both worlds: traditional handwritten work, with AI handling the huge workload of providing feedback almost daily for 100 students. But what parents and leadership wanted to see was a simple handwritten check mark and a sketch of a few words to make it look like I was giving a personalized touch, when in reality it was not possible to deeply analyze so many homework assignments.
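For the curious, something like the following sketch captures the idea, assuming Anthropic’s Python SDK and a vision-capable Claude model; the folder layout, rubric, and model string here are illustrative stand-ins, not the actual system.

```python
# Minimal sketch of the photo-to-feedback pipeline described above.
# Assumptions: JPEG photos of handwritten work in a local folder, Anthropic's
# Python SDK, and a vision-capable Claude model; names here are placeholders.
import base64
import pathlib
import anthropic

client = anthropic.Anthropic()

RUBRIC = (
    "Transcribe this handwritten response, then give two sentences of "
    "specific, encouraging feedback on clarity and argument."
)

def feedback_for(photo: pathlib.Path) -> str:
    """Send one photographed page to the model and return its feedback."""
    data = base64.standard_b64encode(photo.read_bytes()).decode("ascii")
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64", "media_type": "image/jpeg", "data": data}},
                {"type": "text", "text": RUBRIC},
            ],
        }],
    )
    return message.content[0].text

for photo in sorted(pathlib.Path("homework_photos").glob("*.jpg")):
    print(f"--- {photo.name} ---")
    print(feedback_for(photo))
```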
Funny thing is, I come out of the classical education tradition myself with degrees in philosophy, Classics, and History, and a focus on medieval and early modern Europe. I’ve spent most of my career in highly intellectual school environments. And yet, I find myself firmly in the Mr. Mason camp. Maybe that’s because true classical education, at its best, isn’t about clinging to old tools, but rather cultivating judgment, empathy, and adaptability. In that sense, helping students navigate AI with nuance feels more aligned with the humanist tradition than rejecting it outright.