“I enjoy writing,” they said. “I like history. Hate math. English is ok. I’m really interested in learning about AI.”
I wonder how many high school students this voice represents. It’s real. I have it on tape.
These kids aren’t stupid.
This one recalled vividly the first time they tried AI. Word spread on social media freshman year that this big new thing was here. They and a friend had a free period, found a quiet place, and opened a screen.
Who did they go to first to spring this incredible news? Their English teacher!
I wonder how many students opened that screen and found out how joyful it can be to brainstorm with a bot? How innocent it can be? How fraught with risk it is?
How easy it is to get embarrassed in class because you forgot what you wanted to say for your group, because the bot’s language is so easy to forget, so full of fluff upon fluff, and you used it for an outline because you were in a time crunch?
How treacherous it is to offload thinking if the goal is to learn? Fluff in, fluff out.
Make no mistake. This hard line in the classroom is leaving scars.
Kids are afraid to ask questions. They are internalizing this scare campaign like the Sixties, when people supposedly jumped out of windows after smoking a joint.
Sadly, the opportunity to teach reading and writing has never been better. Metacognition is no longer the icing on the cake. It is the cake. Critical thinking is no longer the cherry on top. It is the ice cream. These are basic learning-with-a-bot skills.
Commanding a bot requires seeing around rhetorical corners that shapeshift word by word.
Students who willingly, genuinely commit to an academic ethical code feel guilty every time they use a bot.
Thanks for the poem link! Write more about this, Scott. I’d love to read your inside view. It’s so important right now.
That has been a better discussion for a few years now - Why do you feel guilt? Or why not? The Fasano poem popped up (https://poets.org/poem/student-who-used-ai-write-paper) and it was an interesting poetic entryway into this discussion about what free, fast, shallowly accurate language models offered in 2023 or 2024. It's still a good discussion, but we're at the point where the Fasano poem is known well enough that a very honest, literate student sent it to me this year after watching the entire rest of her class use GPT to breeze through an assignment she chose to think through independently, without LLMs.
I'd even go back to "willingly, genuinely commit" - that's a great concept worth discussing with students as a question, existentially, and one I've been trying to push more when an ethical discussion comes up. What's your goal, for how long, to what end? Why try hard, or sometimes even at all, when these tools just zip through "the busywork"? I do appreciate the research on offloading cognition and the way it grounds the "you'll get stupid and end up in WALL-E" response a number of my students gave this year, too. A good time to talk "scars".