Discussion about this post

Stephen Fitzpatrick:

One point you might consider here, Terry, is the effect of providing certain texts for the LLM to base its responses on. Otherwise, of course, it draws from its well of training data, which can be, depending on the topic, quite vast or quite narrow. In the American Revolution example, unless it's a research assignment, students are likely to have been assigned certain texts (a textbook chapter, perhaps some primary sources, a video, a website, a timeline, etc.) on which to base their response. There are possibilities here to restrict the LLM to only the material provided to the student, so as not to overwhelm them. Additionally, with a pre-set GPT or other set of instructions, the teacher can constrain the LLM's output in ways they might prefer. In essence, when you combine some of the different capabilities of these tools, the potential for different ways to impact the writing process becomes more complex.
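In practice, that kind of restriction is usually done by placing the assigned texts into system-level instructions. A minimal sketch in Python of what a teacher's pre-set instructions might look like; the function name, prompt wording, and sample sources are all hypothetical, not from the comment above:

```python
# Hypothetical sketch: build system instructions that restrict an LLM
# to a fixed set of teacher-assigned sources. The wording and sources
# below are illustrative only.

def build_constrained_prompt(assigned_sources: dict[str, str]) -> str:
    """Compose system instructions limiting answers to the given texts."""
    source_block = "\n\n".join(
        f"[{title}]\n{text}" for title, text in assigned_sources.items()
    )
    return (
        "You are a writing tutor. Base every response ONLY on the "
        "assigned texts below. If the answer is not in these texts, "
        "say so instead of drawing on outside knowledge.\n\n"
        + source_block
    )

# Example: two assigned sources for the American Revolution unit.
sources = {
    "Textbook ch. 5": "The Stamp Act of 1765 imposed a direct tax...",
    "Primary source": "Letter from a Boston merchant, 1770...",
}
prompt = build_constrained_prompt(sources)
print("[Textbook ch. 5]" in prompt)  # → True
```

The resulting string would be sent as the system message (or saved as a custom GPT's instructions), so every student interaction starts from the same constrained context.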

Becoming Human:

There is a lot here! But it's extremely thoughtful.

The two takeaways I am ruminating on:

1. Writing as “output-production” versus a metacognition exercise (with the emphasis on exercise)

2. The recursive problem where any proof of work can always be faked because it is all word-generation, whether it is actual prose or a record of cognition.

I go back to my time at Oxford, reading papers aloud while being challenged by the prof. The paper didn't mean squat; it was just a rhetorical substrate for discourse.
