Key Takeaways
• Academic assignments function as complex institutional systems that translate educational mission into measurable outcomes, inheriting organizational frameworks from district to federal levels.
• Language machines have created two fundamental paradoxes for teachers: the Assessment Validity Paradox (inability to distinguish student from AI work) and the Institutional Compliance Paradox (conflicting demands to prohibit AI while preparing students for AI-integrated futures).
• Eight institutional paths exist for resolving these paradoxes, ranging from process documentation and AI transparency protocols to community-responsive policy development and graduated AI integration curricula.
• Individual teachers currently shoulder the burden of resolving system-level contradictions that require institutional solutions, creating unsustainable pressures on front-line educators.
• Floridi's Levels of Abstraction method of analysis reveals that AI has disrupted educational information processing at multiple nested levels simultaneously, requiring systematic reconstruction rather than classroom-level fixes.
Introduction
My work with Nick Potkalitsky over the past few years has deepened my appreciation for the complexity of the work ahead in terms of redesigning assignments. The presence of language machines in the literacy classroom redistributes agency and interpersonal relationships among stakeholders at all levels in a way I've never seen before.
In the early 1990s, colleagues and I working in a middle school English Department designed and implemented a portfolio assessment system for one school year, specifically to raise the significance of process and to limit the influence of the work product in students' day-to-day thinking within the classroom ecology. Looking back, I now understand that what we were calling "ownership" is better thought of as "agency." Agency is dispensed on a continuum; ownership is a binary.
Nick and I learned just how complex, emotional, nuanced, and personal the work of scaffolding students to assume more agency in their literacy work processes is, and how necessary that work is now that AI is a reality. Classes are supposed to be about getting assignments and completing assignments. Everybody knows that. When we disrupted this assumption, we discovered questions neither of us had thought to ask before, and we got even more input from students.
This kind of integration work can and should be done regardless of individual resistance. Coercion is never the answer, but there is a strong correlation between AI resistance and misinformation or misconceptions about language machines. For this reason, a considerable amount of compassionate, empathetic professional development is essential.
In our case, we collaborated with the administration, students' parents, students, one another, and Nick's colleagues, who agreed to let his students interview them for a collaborative research project on AI and instruction. If I had had then the clarity that Floridi's Levels of Abstraction method gives me now, I would have been better positioned to contribute ideas and to interpret what I was seeing. I offer this post as a conversation starter, not as an exemplar of anything.
The Academic Assignment as Institutional System
The academic assignment operates as a core institutional mechanism designed to fulfill K-12 education's obligation to a democratic society: the reliable development of knowledge, skills, and sociopolitical competencies as mandated by state standards and community expectations. The assignment functions as a formalized system that translates institutional mission into measurable learning outcomes.
The assignment system inherits the K-12 institution's organizational framework, including district policies, state curriculum standards, grade-level benchmarks, and federal accountability requirements. A seventh-grade social studies assignment on the Constitution carries forward the district's civic education mandate, the middle school's approach to abstract reasoning, and the state's specific learning standards for that grade band. The assignment serves as an institutional proxy, embedding broader educational philosophy into concrete, age-appropriate tasks.
Characterizing the assignment as a system-level abstraction within the broader infosphere makes visible how information processing occurs across concentric institutional layers in compulsory education. The assignment system is a bounded informational process with defined inputs (state standards, student developmental stages, community values), structured transformation procedures (reading, writing, activities, events, collaboration, assessment), and specified outputs (grade advancement, standardized test performance, graduation readiness).
By identifying the assignment as a distinct system level, we can analyze how information flows between institutional abstractions and classroom realities while recognizing their embeddedness within complex regulatory frameworks.
Systemic Structure: Disjoint and Nested Variables
Disjoint variables are part of a complex system but do not interact with one another. Imagine standing on the top of a mountain looking down the north slope. Then look down the south slope. Each slope is part of the mountain, but each is separate from the other. Nested variables are like concentric circles. Think about the mountain top itself. Looking down, you notice an array of plant life. You notice insects flying, crawling, and resting on rocks. You notice small rocks. The peak itself has these variables, or categories, nested inside it.
Within the academic assignment system, multiple disjoint slopes operate at equal levels of abstraction. In this analysis I find five, though you may find others: Textual Activities, Collaborative Learning, Expert Consultation and Input, Information Systems, and AI Interaction. These slopes are disjoint in Floridi's sense in that mastery of one doesn't automatically confer competence in another, yet they all serve the same institutional peak.
Each slope contains nested variables at varying levels of abstraction. Within Textual Activities, we find, for example, Assignment Brief → Drafting Process → Revision Cycles → Submission; Reading Human Text → Comprehension → Interpretation → Synthesis → Application; and Reading Artificial Text → Comprehension → Verification → Integration → Contextualization. The distinction between reading human and artificial text becomes crucial at the operational level, calling for different cognitive processes and critical evaluation skills, even though both fall under the broader textual activities domain.
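To make this structure concrete, here is a minimal sketch in Python of the assignment system as levels of abstraction: defined inputs and outputs at the system level, five disjoint slopes, and a nested chain of levels inside each slope. Everything in it is illustrative; the levels listed for slopes other than Textual Activities are my own placeholder assumptions, not categories drawn from the analysis above.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the assignment system as a bounded informational
# process (inputs -> transformation -> outputs) containing five disjoint
# "slopes," each holding its own nested chain of levels.

@dataclass
class Slope:
    name: str
    nested_levels: list[str]  # ordered from most abstract to most operational

@dataclass
class AssignmentSystem:
    inputs: list[str]
    outputs: list[str]
    slopes: list[Slope] = field(default_factory=list)

assignment_system = AssignmentSystem(
    inputs=["state standards", "student developmental stages", "community values"],
    outputs=["grade advancement", "standardized test performance", "graduation readiness"],
    slopes=[
        Slope("Textual Activities",
              ["Assignment Brief", "Drafting Process", "Revision Cycles", "Submission"]),
        # The nested levels below are assumed placeholders, not the author's categories.
        Slope("Collaborative Learning",
              ["Group Formation", "Shared Inquiry", "Peer Feedback"]),
        Slope("Expert Consultation and Input",
              ["Identifying Experts", "Interviewing", "Synthesizing Input"]),
        Slope("Information Systems",
              ["Searching", "Source Evaluation", "Citation"]),
        Slope("AI Interaction",
              ["Prompting", "Critical Evaluation of Output", "Integration"]),
    ],
)

# Disjointness: mastery in one slope does not transfer automatically to another,
# so each slope is inspected on its own terms even though all serve the same peak.
for slope in assignment_system.slopes:
    print(f"{slope.name}: {' → '.join(slope.nested_levels)}")
```

The point of the sketch is simply that disjointness and nesting become inspectable: you can examine one slope without touching another, and you can zoom in or out along a single slope's chain of levels.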
Paradoxes at the Teacher Level
Teachers function as the long-suffering, expert interface between institutional assignment systems and individual student learning, practicing inside structural paradoxes that language machines amplify rather than resolve.
The Assessment Validity Paradox emerges because teachers can no longer distinguish between student-generated and machine-generated work. A ninth-grade history teacher assigning an essay on World War I faces a fundamental measurement problem. Large language machines, when their output is read carefully and verified, can produce historically accurate, rhetorically well-structured responses that match the assignment criteria perfectly.
Small language machines can help with legitimate tasks in ways that make detecting assistance from larger language machines nearly impossible. The teacher cannot fulfill the institutional requirement for valid assessment because the tools available to students have made the assignment's traditional measurement properties unknowable.
The Institutional Compliance Paradox forces teachers to simultaneously follow policies prohibiting unauthorized AI use while preparing students for a world where AI collaboration is professionally essential. Mrs. Fong teaches eighth-grade chemistry. Her district bans AI tools, but she knows her students will need computational fluency for high school AP courses and college labs.
Stoichiometry is a fundamental skill in chemistry because it helps students understand a) how much of each chemical is needed for a reaction, b) how much product a given amount of starting material will yield, and c) why reactions sometimes stop before all reactants are used. When designing stoichiometry problems, she faces an impossible choice.
She can teach students to balance equations manually (satisfying district policy) or show them how AI can check their work and suggest alternative approaches (preparing them for real scientific practice). Professional chemists routinely use computational tools to verify calculations and model molecular interactions.
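As an aside on what such a computational check might look like, here is a short Python sketch that verifies a limiting-reagent and theoretical-yield calculation for one reaction. It is a hypothetical illustration of the kind of verification computational tools automate, not a description of Mrs. Fong's assignment or of any district-approved software.

```python
# Minimal illustrative sketch: checking a limiting-reagent / theoretical-yield
# calculation for the reaction 2 H2 + O2 -> 2 H2O. Values are for illustration only.

MOLAR_MASS = {"H2": 2.016, "O2": 31.998, "H2O": 18.015}  # grams per mole
COEFF = {"H2": 2, "O2": 1, "H2O": 2}                      # balanced-equation coefficients

def check_yield(grams_h2: float, grams_o2: float) -> tuple[str, float]:
    """Return the limiting reactant and the theoretical yield of water in grams."""
    moles = {"H2": grams_h2 / MOLAR_MASS["H2"], "O2": grams_o2 / MOLAR_MASS["O2"]}
    # The reactant that can drive the fewest "runs" of the balanced equation limits the yield.
    extent = {species: moles[species] / COEFF[species] for species in ("H2", "O2")}
    limiting = min(extent, key=extent.get)
    water_moles = extent[limiting] * COEFF["H2O"]
    return limiting, water_moles * MOLAR_MASS["H2O"]

limiting, grams_water = check_yield(grams_h2=4.0, grams_o2=32.0)
print(f"Limiting reactant: {limiting}; theoretical yield: {grams_water:.1f} g of H2O")
```

A student who balanced the equation and worked the problem by hand could compare an answer against this kind of check, which is exactly the verification habit professional chemists rely on.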
Mrs. Fong cannot both follow institutional rules and prepare students for authentic scientific work. Without the kind of intelligent human attention our species is famous for, a temporal mismatch is inevitable: educational policies rooted in pre-AI assumptions will prevent students from developing the AI literacy essential for future STEM success.
Four Paths for Resolving the Assessment Validity Paradox
Path 1: Restructure Assessment Around Process Documentation
The institution could mandate that all assignments include documented evidence of student thinking processes, requiring ground-floor changes to assignment design, grading rubrics, and teacher training. Students would submit reflection logs, revision histories, and metacognitive analysis alongside their work. Validity shifts from "Did the student produce this?" to "Can the student explain their learning process?" Language machines can't document personal cognitive journeys fueled by human metabolism.
Path 2: Explicit AI Integration with Transparency Protocols
The institution could formally incorporate language machine use while requiring complete transparency about AI assistance. Rewriting academic integrity policies, developing AI literacy curricula, and creating assessment frameworks that evaluate student-AI collaboration would be essential. Teachers would design assignments requiring effective AI prompting, critical evaluation of outputs, and synthesis of human and machine insights.
Path 3: Temporal Separation Through Controlled Assessment Environments
The institution could maintain traditional assignment validity by creating AI-free assessment spaces alongside AI-integrated learning environments. Students would learn with AI assistance during instruction but demonstrate competence in monitored, controlled environments during assessment. Traditional measurement would continue under its present assumptions while accommodating transformed learning processes.
Path 4: Competency Redefinition Through Observable Performance
The institution could redefine demonstrable competence by requiring assignments to include real-world application, interpersonal interaction, or physical demonstration that language machines cannot replicate. Students would demonstrate learning through presentations to authentic audiences, collected evidence of collaborative problem-solving, or hands-on, in-the-moment creation requiring physical presence and human judgment.
Four Paths for Resolving the Institutional Compliance Paradox
Path 1: Temporal Policy Differentiation
The institution could establish separate AI policies for different educational phases, explicitly permitting AI collaboration in learning contexts while maintaining restrictions during formal assessments. This would entail detailed policy frameworks that distinguish between learning-phase AI use (encouraged for skill development, say) and assessment-phase AI use (regulated for validity). Teachers would receive clear guidelines about when to promote AI literacy and when to require independent work, resolving the contradiction between engagement and prohibition.
Path 2: Professional Standards Alignment
The institution could align policies with emerging professional and academic standards in relevant fields, emulating industry-partnered curricula that explicitly teach AI collaboration as a professional competency. Local businesses, universities, and professional organizations that understand current AI integration practices could be tapped as resources to develop policies that mirror professional expectations rather than maintaining artificial restrictions.
Path 3: Graduated AI Integration Curriculum
The institution could implement systematic AI literacy education that progresses from supervised AI interaction in elementary grades to independent AI collaboration in high school. The traditional assumptions of the gradual release of responsibility model of instruction would become the gradual assumption of agency.
Of course, comprehensive curriculum redevelopment would take time, teacher professional development would be absolutely crucial, and infrastructure would need investment. Students would learn to use small language machines for specific tasks in middle school, advancing to sophisticated large language machine collaboration by graduation, with clear competency benchmarks at each level.
Path 4: Community-Responsive Policy Development
The institution could establish ongoing dialogue with parents, students, employers, and higher education institutions to develop AI policies that reflect community values and future needs. Formal feedback mechanisms would need to be built, a task AI could assist with.
Regular policy review cycles, along the lines of departmental or grade-level self-studies, could be useful, making clear the decision-making processes that balance traditional educational values with technological reality. Policies would evolve based on evidence of student outcomes and community input rather than maintaining a static prohibition.
Conclusion: The Value of Levels of Abstraction Analysis
Applying Floridi's Levels of Abstraction (LoA) method to the classic assignment system shows how educational institutions operate as complex informational architectures where meaning and function emerge through multiple nested and disjoint relationships. This analytical approach prevents both reductionist collapse (treating assignments as simple teacher-student interactions) and abstract generalization (dissolving assignments into vague swampland).
The method illuminates how paradoxes emerge specifically at interface points between system levels. Teachers operate simultaneously as institutional agents and individual practitioners, creating structural tensions that cannot be resolved at the classroom level alone. By identifying the assignment as a distinct system-level abstraction, we can trace how information flows between institutional mandates and individual learning while recognizing that system-level problems, such as those now surrounding the assignment construct, require system-level solutions.
Most crucially, the Levels of Abstraction analysis reveals that language machines have disrupted the informational properties of assignment systems at multiple levels simultaneously. Traditional educational institutions assumed stable relationships between student inputs, processing procedures, and measurable outputs. Language machines have permanently changed these informational relationships, requiring institutions to reconstruct their assignment systems with explicit attention to new forms of human-machine collaboration.
I'm not suggesting that any of the eight paths outlined above is workable as-is in the real world, nor am I suggesting that all schools are starting at ground zero. As I understand it, some private schools (by no means all) are making nuanced, important progress.
I am suggesting that institutions must reach a level of clarity about the embeddedness and implications of assignments, including outlining multiple possible pathways so that all stakeholders can join the discussion. When I worked on the California Assessment of Teaching Performance in 2002, the development team made good use of a distinction between garden-variety "assignments" and "signature assignments," which are stable but modifiable across teachers, a notion I feel could be useful at this juncture.
Different approaches to rebuilding institutional coherence around assignment systems can function effectively in an AI-integrated school. Without such systematic reconstruction, individual teachers will continue shouldering the burden of resolving paradoxes that emerge from institutional design rather than pedagogical choice. This situation does no one any good, least of all those on the front line: our teachers and learners.
Follow Up Discussions
Discussion Questions for Teachers
Question 1: Personal Practice Reflection
Consider your most important assignments, the ones where student learning really matters to you. How has the presence of AI tools changed your relationship to these assignments? What specific moments of uncertainty have you experienced when evaluating student work? How are you currently distinguishing between legitimate AI assistance (like grammar checking) and problematic AI dependence? What would it mean for your practice to shift from evaluating products to documenting learning processes?
Question 2: Student Agency and AI Collaboration
Think about agency as existing on a continuum rather than as binary ownership. How might you scaffold students to assume more agency in their learning while incorporating AI tools responsibly? What would it look like to explicitly teach students when AI collaboration enhances learning versus when it replaces thinking? How could you design assignments that require students to demonstrate both independent reasoning and effective AI collaboration skills?
Question 3: Professional Development and Support
What professional learning do you need to navigate AI integration thoughtfully? How could your department or school create supportive structures for experimenting with new approaches to assignment design and assessment? What conversations with colleagues, administrators, parents, or students would help you develop more coherent practices around AI use? Which of the eight institutional paths resonates most with your teaching context, and what would implementation require?
Discussion Questions for High School Administrators
Question 1: The Validity Crisis in Your Building
Think through your current assessment practices with AI tools in mind. Which of your signature assignments (those that all students must experience), the ones that truly matter for graduation requirements and college preparation, could be completed entirely by a large language machine without detection?
Given that reality, how do you balance the institutional pressure for measurable outcomes with the growing impossibility of traditional validity measures? What would "restructuring assessment around process documentation" actually look like in your building, and what resistance would you expect from teachers, parents, and district leadership? What do you make of the wrinkle thrown in by the existence of SLMs and LLMs (small and large language machines)?
Question 2: Professional Preparation vs. Policy Compliance
Your AP Chemistry, AP English, and AP Computer Science teachers come to you with evidence that their students are falling behind peers from neighboring districts who explicitly integrate AI tools into coursework. They argue that your current AI prohibition policies are putting students at an academic disadvantage for college and career readiness.
However, your district maintains strict anti-AI policies, and your state assessment systems assume AI-free student work. How do you navigate this temporal mismatch between preparing students for their actual futures and satisfying current institutional requirements? What conversation would you like to have with your superintendent?
Question 3: Systemic Change vs. Teacher Burden
Looking at the eight institutional paths outlined in the document, which ones could realistically be implemented at the building level versus requiring district- or state-level policy changes? Your teachers are currently making individual decisions about AI use that create inconsistent experiences for students and potential liability issues for the school.
What systematic approach could you implement this academic year to reduce the burden on individual teachers while maintaining educational integrity? Of utmost importance, how would you involve students, parents, and community partners in developing policies that reflect both educational values and technological reality?