“Chechitelli is seeing an explosion of interest in her services and AI detection in general. A survey is conducted every year of teachers’ top instructional challenges. In 2022 ‘preventing student cheating’ was 10th, she says. ‘Now it’s number one.’”
Annie Chechitelli is Turnitin’s chief product officer, quoted here by PC Mag on July 6, 2023. When ChatGPT premiered in November 2022, she noted, she, like many educators who subscribe to Turnitin’s service, was skittish about rampant cheating going undetected and being rewarded with course credit. But she now believes AI text submitted to deceive can be easily detected. John Renaud, who founded his own AI company, agreed:
“‘With predictive AI, we’ll always be able to build a model to predict it,’ he says. In other words, the current generation of autocomplete-on-steroids algorithms will always be deterministic enough to have tells.”
It only makes sense. ChatGPT learns statistical patterns during unsupervised training on a corpus of prepared texts, settling into stable and predictable sequences of words and word parts, or tokens. The output looks like an essay, and it may fool a charitable reader some of the time, but the bot has no idea what it is saying. Watch the pregnant pauses followed by a road-runner cursor blasting across the screen, pausing, then firing off a clause. It’s as dumb as a post. Its words are estimates of the next word, much like the home values generated by Zillow are estimates based on regression formulas.
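That “estimate of the next word” idea can be shown with a toy sketch. The bigram counter below is nothing like ChatGPT’s actual architecture (the corpus, the `predict_next` helper, and the sample sentence are all my own invented illustrations); it just demonstrates the principle of picking the statistically likeliest successor word, the way a regression formula picks a likely home value.

```python
from collections import Counter, defaultdict

# Toy illustration only: a bigram model counts which word follows which
# in a tiny made-up corpus, then "predicts" the most frequent successor.
# The estimate is statistical, not understood -- the point of the post.
corpus = (
    "the bot writes the essay the bot pauses the bot writes a clause"
).split()

# Tally successors for each word.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word and its estimated probability."""
    counts = successors[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

print(predict_next("the"))  # "bot" follows "the" 3 times out of 4
```

Real large language models do the same thing at vastly greater scale, scoring every token in their vocabulary rather than counting word pairs.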
So I’m going to ask the bot what it thinks. Give me two minutes and I’ll give you its response. Ok. Here it is:
I read somewhere that some teachers have started collecting on-demand pieces of writing on the first day of class not for a quick-and-dirty assessment, but as evidence should they need to prosecute an indictment of plagiarism. I’m wondering how large a sample of “first-day text” a bot would need to make a digital thumbprint of each student’s syntactic, semantic, and grammatical idiolect, nested within an inferred or self-reported dialect, to use as a basis for comparing upcoming pieces and producing an academic integrity score.
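The “digital thumbprint” idea is essentially stylometry, and a crude version is easy to sketch. Everything below is hypothetical: the feature set (average sentence length, average word length, a handful of function-word rates), the `fingerprint` and `similarity_score` names, and the 0–100 scaling are my assumptions, and a real system would need far richer features and much larger writing samples than a first-day exercise provides.

```python
import math
import re

# Hypothetical sketch: reduce a writing sample to a small feature vector,
# then score a later submission by cosine similarity to the first-day
# baseline. Not a real academic-integrity tool.
FUNCTION_WORDS = ["the", "of", "and", "to", "that", "however"]

def fingerprint(text):
    """Build a tiny stylometric feature vector for one writing sample."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    vec = [
        len(words) / max(len(sentences), 1),              # avg sentence length
        sum(len(w) for w in words) / max(len(words), 1),  # avg word length
    ]
    # Relative frequency of each tracked function word.
    vec += [words.count(fw) / max(len(words), 1) for fw in FUNCTION_WORDS]
    return vec

def similarity_score(baseline_text, new_text):
    """Cosine similarity between fingerprints, scaled to 0-100."""
    a, b = fingerprint(baseline_text), fingerprint(new_text)
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return round(100 * dot / norm, 1) if norm else 0.0

baseline = "I walked to the store. The rain fell hard, and I ran home."
submission = "I biked to the park. The sun was warm, and I stayed late."
print(similarity_score(baseline, submission))
```

A submission written in a markedly different idiolect would pull the score down; how far down before it counts as evidence is exactly the open question the teachers’ strategy raises.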
The Russell Group of universities in the UK, an association formed in 1994 that now connects 24 of the most prestigious institutions of higher learning in the world, is a bit divided on what to do about breaches of academic integrity caused, allegedly, by the availability of large language bots. With notables like Oxford, Cambridge, and the University of Edinburgh among its members, the academic and cultural consequences wrought over time by the bot ban will be interesting to watch:
“Eight out of 24 of the elite Russell Group universities have informed students that using the AI bot for assignments will count as academic misconduct, including Manchester, Bristol, Edinburgh and Oxbridge. Dozens of other universities across the country, including Durham, Liverpool and Northampton, are scrambling to review their plagiarism policies in time for this year’s assessments, after ChatGPT exploded onto the scene in November.”
Once again—as I did in the post about mosaic plagiarism a week or so ago—I asked ChatGPT for advice. Are we fiddling while Rome burns? How do we change the trajectory on plagiarism in writing classrooms? I’ll give the bot the last word:
https://apple.news/ARNNE2mWYR8u-nlS_pBKRPw