Large Language Models: The Sixth Resource
The Oxford Model
Should using language bots during academic work lead to charges of academic misconduct? Will the technology erode academic integrity, weaken the drive to persist in the face of complexity, seduce students into taking the easy way out? Will students come to depend on the bot for analysis while their cognitive chops atrophy? Will the digital coach introduce layers of unfairness in tests and assessments? Is there a case to forbid the bot from coming into schools at all?
The Russell Group, an assemblage of 24 prestigious British universities including Oxford and Cambridge, debated these and related questions during the 2022-23 academic year. Eight of these universities decided to consider any use of ChatGPT to be academic misconduct, and on February 6, 2023, Oxford added information to this effect on its website.
“New Artificial Intelligence (AI) tools such as ChatGPT have the potential to change the way we teach and learn in many positive ways. However, the University made clear to students in the Student News today that the unauthorised use of AI tools in exams and other assessed work is a serious disciplinary offence. University websites and materials are being updated to underline that unauthorised use of AI for exams or submitted work is not permitted, and further guidance to students will be issued soon.”
Interestingly enough, on the very same webpage putting the fear of the Registrar in the hearts of students, there is a link to a multimedia paper published by the Oxford Center for Teaching and Learning. Written in four parts, apparently over the space of a few weeks, the paper flows from a cautious tone to a more optimistic stance and ends with a link to a blog post by Ryan Watkins, a professor at George Washington University, who sets the table for university faculty to write an exciting, engaging, AI-infused syllabus. Here is the intro to that paper:
“The last two months have seen a flurry of reactions both optimistic and concerned, as well as explorations of various highly practical use cases. People from diverse backgrounds have pushed the limits and discovered many unexpected uses. The immediate concern (dating back to the release of GPT-2 in 2019) is the use of this technology for cheating. And in fact, we are hearing those concerns from colleagues as well as people outside academia. However, when we survey the published reflections on ChatGPT we see a more sober and even optimistic picture emerge.”
The Oxford Center for Teaching and Learning Model (OCTLM)
The early going in this Center for Teaching and Learning's four-part paper acknowledges and validates the sincere feelings harbored by academics worried that the souls of scholars shall be scarred by the bot. But soon the optimism bubbles up and, as the "however" in the last sentence of the quote above promises, the tide turns. The Center for Teaching and Learning cites scholars of all stripes who are interested in the affordances for creativity, for efficiency, and for organization, for teachers and learners alike, once everyone gets a grip on what is really going on and how to adjust. "Something is happening here but you don't know what it is, do you, Mister Jones…" (Dylan, 1965).
For example, the Center for Teaching and Learning quoted an AI aficionado to nail the obvious fallacy in the reasoning that access to outside source material leads to cheating. The assumption implicit in the argument is untenable. If it were true, visiting the library would be grounds for a charge of academic misconduct.
The Center includes a link to the post by Ryan Watkins, discussed earlier, with advice for college instructors on writing an AI-infused course syllabus. If it isn't clear to you yet, here is where I reveal my conclusion about Oxford's "use-AI-and-you-lose" rule: Even the administration doesn't believe the rule can stand. Why else would the Center for Teaching and Learning publish something as rich, complex, sensible, and enlightened as the multimedia paper just one click away from the edict? Here is what Ryan Watkins has to say:
“Ready or not, chatGPT (the newest version of OpenAI’s impressive AI technologies) is now in your classroom. It can write papers, essays, and poems. It can create art and write computer code in many languages. This is not however the time to panic; it is the time to focus on the value you offer students as their instructor.”
The Added Value of Tools
The Center offers an autobiographical example, uploaded as a blog post by an academic, Rachel, who learned through experimenting with a bot while writing a paper that the bot doesn't make you a card-carrying member of the academic mafia, it doesn't steal your breath, it won't bite, and it actually can help.
One salient theme in Rachel's reflection is worth considering. She "trusts" the bot as a result of a positive experience but hedges a bit with the mighty modal of subjunctivity, "might." The actions she singles out where the bot might "help me" include "[to] see" and "not miss out."
These are not the actions of a member of the academic mafia, unless what she sees belongs to somebody else and she intends to steal it. The intent is to use a tool for two purposes: 1) to help her self-assess the organization and structure of content, much like a surgeon uses imaging tools to assess an in vivo procedure, and 2) to help her spot any lost or "missed" opportunities to elaborate substantively. Rachel's generous comment inspired me to look into analogues in other professions involving professional ownership of and responsibility for work product.
Just as a writer can now use a bot to aid in thinking through decision-making points, professionals in many fields rely on assistive tools and technologies for a multitude of tasks. Lawyers use case management software to organize and analyze legal documents, building legal arguments as work products more efficiently. AI research tools sift through legal databases, efficiently finding relevant precedents and legal interpretations.
Engineers use computer-aided design (CAD) tools to assist in creating complex blueprints and schematics, and simulation software to test and refine designs virtually before the rubber meets the road. Architects use AI to design and visualize intricate building structures, and Building Information Modeling (BIM) technology enables collaborative planning across elements such as structural, electrical, and plumbing systems.
All of these examples share at least one element: the bot helps the professional see, look into, sift through, rearrange, tweak, reframe, look behind, and look around; the bot does not create content for the professional to pawn off as their own. The bot is dumb as a post and will perform tedious, complex tasks involving classification, regression, or clustering algorithms. You don't turn to a bot to diagnose an illness, design a bridge, or write a paper. But you can use the help; it is a new way to use the library.
The Feeling that Something’s Wrong
Uncertainty and anxiety regarding the consequences of the bot in schooling arise from real issues, paradoxes that loop and twirl in the mind of the educator in this period of deep change. On one hand, tests must be fair to all students; advantaging one student disadvantages everyone. Controlling the effects of the bot in assessment is crucial. Perhaps banishment of the bot is the solution.
On the other, restricting access to the bot when working on assignments in school is unfair because affluent students will have easy access to the full complement of AI tools, which will become increasingly expensive, while less affluent students will depend on traditional hard copies and ink pens, hearing a solo voice in their heads, dealing with recurring questions that could easily be resolved, stuck on a minor detail. One student commented on his experience in an AI-infused course (a science course, if memory serves) with something like: "I would have been lost halfway through without the language bot. But I succeeded, and I understand the content."
On one hand, banning the bot from the school universe could compel students to develop their metacognitive and critical thinking skills, motivating them to become more resourceful and creative. Let them do the AI thing outside of class on their personal stuff but not when working on assignments. On the other, bots can provide personalized help, suggestions for different ways to look at content or ideas, and access to the conventional wisdom, which can then become the target of critique that the bot can also help expand and sharpen.
On one hand, keeping the bot isolated from schoolwork could uphold the traditional culture of scholarship, with its non-negotiable respect for ownership of one's writing and provenance of original ideas and theories. On the other, access to the bot, if used thoughtfully and carefully under the guidance of teachers, could increase sensitivity to the significance of citing sources of ideas, perspectives, and constructs. The bot could help students develop a deeper understanding of paraphrasing a source, of using a source for support or for an opposing view, and of providing full acknowledgment and documentation.
Having this debate in a civil and rational manner is important. Teachers are often positioned to react to mandates from above involving curriculum, instruction, and assessment, with little regard for what they think about it. This appears to have happened in knee-jerk fashion at Oxford, at least in terms of the mandate.
In the old system, the role of the teacher could be rigidly prescribed from above, and students had little choice but to react accordingly. In the age of the bot, in which 1) the bot's algorithms are going to improve, 2) its training protocols will get more and more effective, and 3) the menu of tasks it can assist with will expand, teachers become key to engaging learners in interesting and complex tasks indexed to their interests and emerging expertise, more like orchestra conductors than assembly-line supervisors.