The swift institutional response to ChatGPT's release in late 2022 revealed a familiar pattern in educational reform: the impulse to preserve existing practices through prohibition rather than engaging thoughtfully with disruptive change. As major school districts moved to block AI tools from their networks, they echoed historical instances of professional resistance to technological innovation, from medicine's initial rejection of telemedicine to journalism's reluctance to embrace digital formats.
This reflexive institutional resistance deserves careful examination. Seymour Sarason's (1990) analysis of educational reform's cyclical failures provides a useful framework for understanding current responses to AI. Just as previous reform efforts often faltered by failing to address fundamental power relationships within educational institutions, current approaches to AI integration risk similar limitations by focusing primarily on threat prevention rather than systematic adaptation.
The parallels with other professions' experiences of technological disruption are instructive. The medical profession's evolution from resisting to integrating AI as a diagnostic tool, and the legal field's gradual embrace of AI-assisted research, demonstrate how initial professional resistance often reflects deeper concerns about maintaining professional identity rather than purely practical considerations. Both fields ultimately found ways to preserve core professional values while adapting their methods of service delivery to enhance access and effectiveness.
Education now stands at a similar crossroads. The rapid adoption of AI tools by students, particularly for writing tasks, reveals limitations in current pedagogical practices that demand attention. Rather than viewing AI primarily as a threat to academic integrity, we must ask more fundamental questions about our instructional methods and assessment practices. The fact that students so readily embrace AI-generated writing should prompt serious reflection about the nature and purpose of writing instruction in our educational institutions.
Moving forward requires breaking down the false dichotomy between complete preservation of traditional methods and wholesale transformation. Instead, education must develop hybrid approaches that thoughtfully integrate AI while maintaining pedagogical integrity. This demands the creation of diverse professional learning communities that bring together educators, students, technologists, and researchers to develop context-specific implementation strategies.
The path forward lies not in resolving but in productively managing the tension between institutional stability and technological innovation. Education's core mission—fostering learning and development for all students—remains constant, but our methods must evolve. As we navigate this transition, we would do well to remember that successful professional adaptation historically comes not through rigid resistance or uncritical embrace, but through thoughtful integration that enhances rather than diminishes professional expertise.
The academic community has a crucial role to play in this process. We must move beyond institutional resistance and actively develop frameworks for meaningful AI integration in educational settings. This approach requires both empirical research into effective implementation strategies and theoretical work that helps us understand how to preserve educational values while embracing technological innovation. The question is not whether education will adapt to AI, but how we can shape that adaptation to enhance rather than diminish our educational mission.
These are extremely strained analogies. Digital media did not take away the activity that develops a writer's craft (writing itself), and telemedicine, though assuredly inferior to direct, in-person care, removed a lot of information from the interaction but did not remove the act of doctoring that makes one a better doctor.
AI, however, is like the automated cockpit of a modern airplane. It so abstracts the act of flying that the pilot becomes demonstrably worse at piloting.
Humans need to actually do a thing to master it, and AI is not a different channel through which one "does"; it is a substitute doer. A scholar who does not write or read primary sources will be a worse scholar. The very act of the craft is being removed, leaving no way to improve through practice.
There is likely no stopping AI, but it is already destroying our capacity to think, and it may ultimately strip us of our agency as beings.