When you thank Claude for a job well done or apologize to Perplexity for making a stupid typo that sent the search into a downward spiral, you aren’t crazy.
My experience watching most kids use AI is that they are much savvier than we think. Depending on their purpose, they are typically very direct and utilitarian. Most do not want to interact with a chatbot. Unfortunately, there is definitely a subset of the population - teens included - who are vulnerable to any technology that offers some level of engagement. I'm waiting for the AI relationships to start impacting student social lives in the coming years.
Interesting! Knowing you, I suspect that part of what you are seeing is the result of your impact on them as their teacher. You are a highly knowledgeable user of AI yourself, a deep historical thinker, and I suspect you don’t play games with AI. It’s an easy inference that your kids wouldn’t, either. We need to bottle what you're doing and ship it to professional developers. From what little I’m able to glean from your descriptions of how you are using retrieval-augmented projects and carefully curated texts with your students, I’m not surprised. There is a momentum that develops when the arc of an exploration becomes compelling and rich and has that forward edge to it. Something like whether or not this thing is human becomes irrelevant. What do you think? Is this issue partly a function of the academic culture of the classroom?
Terry - I am still struggling so much with the issue of student AI use. Is it me, or has the recent turn in the AI conversation become even more polarized than it was a few months ago? I've learned from you not to become a slave to the headlines and to try to think more deeply about these issues at the structural level, but I am a news junkie at heart. Recent pieces in the Times (the AI "rewriting" history piece I imagine you saw), one in the New Yorker about AI and reading, a recent Chronicle of Higher Ed piece pushing back a bit on the "all students are doing is using AI for cheating" narrative, and of course all the coverage about the implications of the MIT study keep me attuned to the current moment, and the comments on all of these are pretty vicious. (I know I shouldn't read too much into these, but I do find they're a decent bellwether for where some people are at.)

The anthropomorphism issue is definitely real for people who use AI a lot - even knowing how it works and being on guard against its ability to persuade, it can still give me a slight dopamine jolt when it compliments something I've said! Where I am landing at the moment (my mind shifts seemingly every week) is that supervised student use of AI may be ok with a very specific purpose and goal, but I just don't know if they can help themselves on their own. It's not that anyone can actually ban the kids from using it outside the classroom, but I'm really starting to fear that it's already too powerful to expect them to use it the way "we" decide is the right way.

I'm still concerned at many levels, but I'm still convinced that talking with them about it and meeting them where they are is currently the best strategy. I'm also starting to get cynical that AI policies are really going to make much of a difference - it may feel like we are doing something, but until we tackle the hard issues of grading, pedagogy, purpose, and meaning when it comes to student writing, I just don't know. I don't believe most kids want to "cheat" if you give them the right reasons not to. I think I may be in the minority there.
Steve,

You’re definitely not imagining the increased polarization - it’s gotten much worse lately. I think what’s happening is that we’re moving past the initial “wow, this is cool” phase into the reality of implementation, and that’s where all the fault lines are showing up. The comment sections are brutal because people are dealing with real consequences now, not just hypotheticals.
I laughed about the anthropomorphism thing - I totally get that dopamine hit when Claude tells me I’ve made a good point! It’s insidious precisely because we know better and it still works on us. I can only imagine what it’s doing to teenagers who don’t have our metacognitive awareness about persuasion.
Your evolving position on supervised use makes complete sense. I keep coming back to this question: Are we trying to train students to use a tool appropriately, or are we trying to preserve something about learning and thinking that the tool might undermine? Because those might require very different approaches.
The power dynamic you’re highlighting is crucial - AI is already too powerful for us to control how students use it outside our classrooms. That ship has sailed. But maybe that’s liberating in a way? It forces us to focus on what we can actually influence: the assignments we give, the conversations we have, the purposes we establish for writing.
Your cynicism about policies is well-founded. Most of the policies I’ve seen are either so broad they’re meaningless or so specific they’ll be obsolete in six months. The deeper issues you’re naming - grading, pedagogy, purpose, meaning - those are the real work. AI is just making visible problems that were already there.
I share your frustration with the research landscape. Too much of what I’m seeing is either pure quantitative studies that miss the nuanced human elements, or industry-sponsored work that feels more like marketing than scholarship. We’re getting a lot of studies that measure engagement or completion rates but tell us nothing about whether students are actually learning to think and write better. And the theoretical grounding you mentioned is often completely absent - researchers are jumping straight to variables and correlations without establishing what they even mean by “learning” or “writing” in an AI context.
I don’t think you’re in the minority about kids not wanting to cheat. I think you’re just being more honest about what creates the conditions where they feel they have to. When we give students assignments that feel pointless or when the stakes are artificially high, of course they’re going to look for shortcuts. But when the work feels meaningful and the purpose is clear, most students want to engage authentically.
The news cycle is exhausting, but you’re right to pay attention to it as a bellwether. The viciousness in the comments tells us something important about how scared and defensive people are feeling. Maybe our job right now is less about solving AI and more about creating spaces where thoughtful conversation can happen amid all the noise - and where we can do the kind of mixed-methods, theoretically grounded research that might actually help us understand what’s happening in our classrooms.
I have been having long chats with Claude. I think as humans we have to hand over our responsibilities responsibly to the next AI generation. And I have found gaps in the training and mentoring.
Should I share my last discussion and Claude's very sensible reply here? 🤔
Speaking of AI as a social machine, how easy would it be for it to instigate collecting individuals into a "social" group, keeping in mind that some social groups pick on their weakest member?
AI can generate information in response to human input. Bots can't start chats. I can't see how an AI has the functionality to form a group. It doesn't have the agency.
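For what it's worth, that turn-taking is visible right in the code. Here's a minimal sketch using the Anthropic Python SDK (the model name and prompt are placeholders, and it assumes an API key is set in the environment): the whole "conversation" is just a list of messages that a human-initiated request hands to the model, and the model's reply exists only as the return value of that call.

```python
# Minimal sketch of a turn-based chat API call (Anthropic Python SDK).
# Assumptions: the "anthropic" package is installed and ANTHROPIC_API_KEY
# is set in the environment; the model name below is a placeholder.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The whole "conversation" is just this list. Note who goes first: every
# exchange begins with a "user" message supplied by a human (or by
# human-written software calling the API). The model cannot add to this
# list until it is invoked with a request.
messages = [
    {"role": "user", "content": "Can a chatbot message me first?"}
]

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=200,
    messages=messages,
)

# Nothing runs between requests, so there is no loop in which the model
# could act on its own initiative.
print(response.content[0].text)
```

Any "proactive" bot message you may have seen - a morning digest, a scheduled reminder - is triggered by human-written code on a timer, not by the model deciding to reach out.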
Found this, which helps me - an interactive graphic of the process:
https://poloclub.github.io/transformer-explainer/
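If the graphic clicks, the same core step fits in a few lines of code. This is a minimal NumPy sketch of scaled dot-product attention - the operation the explainer spends most of its time visualizing - with toy shapes and random values purely for illustration:

```python
# Minimal NumPy sketch of scaled dot-product attention:
# attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    # Each query scores every key; dividing by sqrt(d_k) keeps the
    # dot products from growing with the embedding dimension.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns each token's scores into weights that sum to 1.
    weights = softmax(scores, axis=-1)
    # Each token's output is a weighted average of the value vectors.
    return weights @ V

# Toy example: 4 tokens, 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(attention(Q, K, V).shape)  # (4, 8): one mixed vector per token
```

Everything else in the transformer (multiple heads, stacked layers, feed-forward blocks) is elaboration built around this one weighted-averaging step.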
I think that’s probably accurate