Like many AI enthusiasts, I envision AI revolutionizing how we teach and learn. In medical education, for example, students could practice diagnoses on thousands of AI-generated cases while surgeons perfect techniques through AI-enhanced simulations.
The integration of learning and doing through AI seems compelling. Students across disciplines might engage in immersive, hands-on learning experiences embedded in project-based, problem-based, or activity-based pedagogy—using generative, exploratory, and analytical AI tools.
However, this optimistic vision may be dangerously naive. The "transformative opportunities" could be mirages, or worse, harbingers of educational disruption. What if, instead of enhancing critical thinking, AI dependency gradually erodes students' ability to reason independently? This fear is real.
Consider how GPS has impacted our natural navigation abilities, an argument I have often read from AI skeptics. Might AI similarly atrophy core cognitive skills? The integration of learning and doing that seems so promising could instead create a generation of learners who can neither learn deeply nor do anything effectively without AI assistance.
The excess of caution that some consider a massive educational failure might actually represent appropriate prudence. Rather than viewing such caution as an obstacle to progress, we could perhaps see it as wisdom—recognition that the stakes in education are too high for uncritical enthusiasm.
The real question isn't how quickly we can integrate AI into education, nor how effectively we can ban it, but whether and how we can harness its capabilities while protecting fundamental human development.
***
Right now, education has no scientific body to serve as a trusted monitor. Medical innovations must prove both their efficacy and safety before widespread adoption; educational AI integration deserves equally rigorous evaluation of its long-term impacts on learning, thinking, and human development. But who can or will do it?
The future of education may indeed be transformed by AI, but we must ensure it's a transformation we thoughtfully choose rather than one we blindly embrace.
How will the field establish a stable base of reliable knowledge?
***
While many assume AI offers transformative opportunities in education, this assumption demands rigorous investigation through targeted research. Again, the medical education analogy is instructive.
Just as AI-enhanced simulations allow medical students to practice complex procedures, educational AI could transform how students engage with demanding learning tasks. However, this potential must be validated through carefully designed school-based studies that meet three essential criteria.
***
First, AI integration must explicitly target higher-order thinking within complex learning activities. Rather than using AI for routine tasks like grammar checking or basic calculations, research should examine how AI tools can scaffold sophisticated cognitive processes in epistemic task environments.
For instance, students might use AI to analyze historical documents, develop scientific hypotheses, or critique literary arguments. These applications would mirror how medical residents use AI simulations to develop clinical reasoning: not just memorizing procedures, but understanding complex decision-making processes.
Second, this research requires theoretically grounded frameworks manifested in detailed syllabi and data collection protocols. Just as medical education carefully structures the progression from basic skills to complex procedures, AI-enhanced learning activities should follow clear theoretical principles about knowledge construction and skill development.
The syllabus should specify not only what AI tools students can access, but how these tools support specific learning objectives within the task environment of the course. Teachers must be empowered to orchestrate this AI ecosystem, guiding students toward meaningful engagement rather than superficial tool use.
Third, and perhaps most crucially, successful AI integration depends on fostering self-regulated learning within well-defined boundaries in predictable task environments. Students need freedom to explore AI capabilities while remaining focused on course learning goals. This mirrors how medical students might use AI simulations to practice procedures repeatedly, making decisions independently while working within established protocols.
The research should examine how students develop agency in using AI tools while maintaining academic integrity and advancing their understanding. This structured research approach addresses the concerns raised about uncritical AI adoption.
Rather than assuming transformative benefits or fearing catastrophic harm, educators can systematically investigate how AI affects learning processes and outcomes. The excessive caution some criticize can be replaced with evidence-based decision-making about where and how AI genuinely enhances education.
The goal isn't to revolutionize education overnight but to build a reliable knowledge base about effective AI integration. Through local research meeting all three criteria (targeting higher-order thinking, anchoring pedagogy in recognized theoretical frameworks of learning, and supporting self-regulated learning), the field can move beyond speculation to understand how AI might thoughtfully enhance rather than undermine authentic learning.
***
This approach allows us to envision and validate AI's role in education, ensuring that transformation, when it comes, serves rather than subverts educational goals. The future of education will indeed involve AI for good or for ill, but it should be shaped by evidence rather than enthusiasm, anger, or fear, and guided by careful research rather than unchecked assumptions.
Either teachers learn higher-order ways to leverage AI for greater learning—or learners learn lower-order AI ways to offload cognitive struggle.
This analysis of AI's role in education will likely resonate most strongly with educators and administrators who already recognize the need for thoughtful, evidence-based innovation.
Those who have experienced both the promise and challenges of technological integration in their classrooms will find familiar ground in the call for systematic research and careful implementation. The proposed framework, emphasizing higher-order thinking, theoretical grounding, and self-regulated learning, offers a concrete path forward that neither dismisses AI's potential nor turns a blind eye to its risks, and does not surrender to uncritical enthusiasm.
However, this measured approach is unlikely to satisfy those at either extreme of the debate. AI enthusiasts who envision rapid, transformative change may view the proposed research framework as unnecessarily cautious and bureaucratic, potentially stifling innovation.
Similarly, staunch AI skeptics who see any technological integration as a threat to traditional educational values may dismiss the framework as just slowing, rather than preventing, what they view as educational deterioration. Substack has its fair share of devoted skeptics and doomsayers who have developed a following. It is fertile ground for fiery oratory.
Neither extreme group is likely to be swayed by calls for systematic research and gradual implementation.
The most significant impact of this modest analysis may be on those in positions to shape local institutional policies: department chairs, curriculum developers, and educational policymakers. I know I have some readers in this group. Message me to talk about possible designs for doable local research projects; I have a million ideas, ranging from very simple studies to massive research programs.
For these stakeholders, the parallel with medical education's careful adoption of new technologies, combined with specific criteria for research and implementation, should be compelling. These criteria provide a practical model for managing AI integration while maintaining educational integrity. I encourage these readers to explore uses of AI in medicine. Modest, local research can be carried out in just one classroom and still yield important insights worth sharing.
As education grapples with AI's inexorable advance, this common ground framework offers a way to move beyond both uncritical enthusiasm and reflexive resistance toward evidence-based innovation that serves, rather than subverts, fundamental educational goals.
As a Dutch teacher with a myriad of questions about the responsible use of GAI in the classroom, experiencing near-total apathy on the subject among both colleagues and management, I see this as the only logical way forward. We need a group of forward-thinking, principled, scientifically grounded, and hands-on educators to explore it. That's a lot to ask of a sector that is already seriously overtaxed, but this circumstance doesn't make these steps less necessary. It also needs cooperation instead of ideological battles over the bigger picture of AI and society/ecology. That's a big topic in itself, no less important and clearly related, but it shouldn't drain energy from the necessary educational explorations.