14 Comments
Malcolm J McKinney:

Thanks, Terry.

Malcolm J McKinney:

One more: If students are using AI to write assigned papers, they should be required to provide three annotated references. Art and music are different kinds of creations. Can AI store copyright info and secure it?

Matt Renwick:

Interesting post and observation; thanks for sharing, Terry.

I recently attended a conference session on AI in the ELA classroom by Dr. Troy Hicks. During a turn and talk, I asked the English teacher next to me where she was with using the technologies with the students. "I don't trust them," she said. "The chat bots?" I asked. "No," she responded, "the students."

I wonder if that is where this fear is coming from: a lack of faith and understanding in how the students can use this technology to augment their writing and thinking. Instead of being curious, some educators would rather dismiss this innovation than change their practices (especially releasing more control of the learning over to the students).

I didn't disrupt her deficit thinking, but I did note that it was great she was at the session. I wonder if this university administrator's letter is a first step in their process of understanding what this is and can become.

Terry Underwood:

I think so, Matt. If a person has little experience with a bot, the most striking thing is its ability to write. If my understanding led me not to trust the student, that would be evidence I don't understand the affordances of AI. That's so funny: "I don't trust them." Who? The bots? Great title for a post.

Matt Renwick:

"A.I. or Students: Who do you trust in the ELA classroom?" Funny that I am referring to both as "who"... :-)

Terry Underwood:

You've got a compelling essay here, my friend. If you play that out to its logical conclusion (reductio ad absurdum), you reach a point where the problem for learners is the teacher's lack of trust in them. I don't think I could teach if I didn't trust the students. Maybe I'm naive, but I don't think so. I don't trust them to read the textbook, so I give a multiple-choice quiz to reward and punish compliance. I don't trust them to think for themselves if they use AI; could this mean I have a problem with intrinsic motivation? Might I try to integrate more choice into the classroom where I can? Might I give them a choice to experiment with a bot?

Matt Renwick:

Good point, Terry. Teachers would benefit from examining their beliefs and values when they lack trust and are experiencing fear.

Terry Underwood:

Examine fears first? What are your worst fears as a teacher when you're teaching reading and writing? What's the relationship between trust and fear? What do you trust your students to do? What do you have some mistrust in? How do students feel when they sense they are not trusted? Does that lead to fears in students? Does that not lead to mistrust of the teacher? So teachers mistrust students because they fear students won't do their work, and mistrusting the students leads, in turn, to the students' fear of the teacher. The relationship between trust and fear seems somehow central to teaching.

Malcolm J McKinney:

How would you test AI?

Have it write arguments both for and against a specific topic?

Terry Underwood:

I’m not sure I understand your question, Malcolm. I know that AIs are evaluated based on milestones. Many of these tests compare bot performance with human performance. Is that what you mean? For example, natural language bots could not disambiguate pronouns as well as humans until 2019. For many years that was a milestone until it was passed.
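The pronoun milestone Terry mentions is the kind of task the Winograd Schema Challenge was built to test. Below is a minimal sketch of how such a bot-versus-human comparison might be scored; resolve_pronoun is a hypothetical stand-in for whatever model is under test, and the two items are classic Winograd-style examples rather than a real dataset.

```python
# Scoring a toy pronoun-disambiguation benchmark (Winograd-schema style).

def resolve_pronoun(sentence: str, pronoun: str, candidates: list[str]) -> str:
    """Hypothetical model call: return the candidate the pronoun refers to."""
    return candidates[0]  # placeholder; a real model would be queried here

ITEMS = [
    # (sentence, pronoun, candidate referents, correct referent)
    ("The trophy doesn't fit in the suitcase because it is too big.",
     "it", ["the trophy", "the suitcase"], "the trophy"),
    ("The trophy doesn't fit in the suitcase because it is too small.",
     "it", ["the trophy", "the suitcase"], "the suitcase"),
]

correct = sum(
    resolve_pronoun(sentence, pronoun, candidates) == answer
    for sentence, pronoun, candidates, answer in ITEMS
)
print(f"Bot accuracy: {correct / len(ITEMS):.0%}; humans score near 100% on such items.")
```

The milestone stands as long as the bot's score sits well below the human baseline; once models closed that gap, around 2019, the test stopped discriminating.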

Malcolm J McKinney:

To use the old code-writing quip: garbage in, garbage out.

AI must depend on how one talks to it.
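A minimal sketch of that sensitivity, assuming the OpenAI Python SDK as one possible interface; the model name and both prompts are illustrative placeholders, and the same contrast holds for any chat bot.

```python
# "Garbage in, garbage out" at the prompt level: the same bot, asked two ways.
# Assumes the openai package is installed and an API key is set in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A vague prompt invites vague output...
print(ask("Write about trust."))

# ...while a specific, constrained prompt gives the bot something to work with.
print(ask(
    "In 150 words, argue that a teacher's mistrust of students who use AI "
    "may reflect a problem with intrinsic motivation. Offer one classroom example."
))
```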

Terry Underwood:

Exactly.

Malcolm J McKinney:

What might happen if, say, a Left or Right conspiracy nut asks an AI to prove their opinion?

Terry Underwood:

The bot is designed to generate output. I suppose a person could prompt an AI to “prove” anything, given the right training. But proof is a human prerogative. The bot deals in information and massages it according to computational logic. Any output used in an argument must be verified by a human, whether it makes or breaks the argument. It would be like blaming the shovel because the grave wasn’t deep enough.
