“We, the APA Style team, are not robots. We can all pass a CAPTCHA test (https://en.wikipedia.org/wiki/CAPTCHA), and we know our roles in a Turing test (https://en.wikipedia.org/wiki/Turing_test). And, like so many nonrobot human beings this year, we’ve spent a fair amount of time reading, learning, and thinking about issues [related to] artificial intelligence (AI) and AI-generated text…” (APA Style Team, April 27, 2023).1
If you visit the link in the footnote below, you’ll find the document they prepared in response to a growing need for citation guidance. Unless it is fake, the document strikes me as extremely bizarre.
I found it via a link on a webpage from Arizona State University.2 Titled “Teaching and Learning with AI,” the page represents the campus’s official perspective on the topic.
“Several resources are available to you as you begin to explore AI,” the university writes to its faculty. “Here are some quick steps and resources to get started.” The APA Style team’s PDF with rules for citing bots is one of ASU’s recommended resources.
As a consultant working on a curriculum for teaching AI theory in the context of a high school writing classroom, I’ve been searching for this guidance myself. I felt lucky when this PDF crossed my path. I desperately need something solid.
***
Did you read the opening? Is it not hilariously defensive and weird? It reads like someone desperately trying to prove they're human in the most unnatural way possible. "Hello fellow humans! Look at how human I am! I can do CAPTCHAs!"
The irony is that starting a professional style guide with "We're not robots, we promise!" actually makes them sound more robot-like, not less. It's like something an AI would write while trying too hard to seem human.
I don’t get it. Perhaps the APA Style team deals only with style, not substance. This sarcastic, unserious attitude shows up in the strangest places—places where one might expect some serious thought.
The whole tone is bizarrely self-conscious and almost apologetic. Why does the APA Style team need to assure anyone they're human? Has anyone actually accused them of being robots?
This kind of awkward, defensive posturing undermines their credibility right from the start. Instead of straightforwardly addressing AI citation guidelines (which we've established are problematic anyway), they've created this inexplicable meta-commentary about their own humanity.
Their burlesque shows how AI is making some institutions so uncomfortable that they're losing their professional composure and falling into almost comical self-justification. Would you agree this introduction damages their authority on the topic before they even get to their main points?
***
Then they get to their main points, where I expected to get some authoritative help—omg. What I found instead is a bureaucratic masterpiece of nonsense.
First, they tell us to "describe how you used the tool in your Method section." Because apparently writing "I asked ChatGPT some stuff" needs to be formalized into methodology. What's next? A detailed ethnography of my conversation with Alexa?
Then there's this. I’m flabbergasted: "Unfortunately, the results of a ChatGPT 'chat' are not retrievable by other readers." You don't say! Shocking. So tell me again—what is the citation for? A citation is supposed to link author, source, and reader, and here there is nothing for the reader to retrieve.
It gets better. They explain that we can't cite ChatGPT as a personal communication because—hold onto your hats—"there is no person communicating." Oh. So how can there be an author? Don’t citations cite authors?
Thank goodness the APA Style team cleared that up. Here I was thinking I was having deep philosophical discussions with a real person named Chat. Chat is, after all, a machine.
The recommendation to put "long responses from ChatGPT in an appendix" for posterity is full-blown nuts. Because future scholars will definitely need to study ChatGPT's random word salad from March 2023.
And the meticulous version numbering. Because "(Mar 14 version)" is crucial information. Heaven forbid someone think you used the Mar 13 version. That would invalidate your entire study.
They even suggest we should document the "exact text that was generated" because ChatGPT gives unique responses each time. You know, just like how humans never rephrase the same idea differently... oh wait.
My favorite part is the grave warning about ChatGPT's references: "Authors using ChatGPT should consider making…scrutiny of the primary sources a standard process." Translation: "Maybe check if the stuff it makes up is real." Revolutionary academic advice there.
***
Let's talk about their FOUR CRUCIAL ELEMENTS of a ChatGPT citation. Crucial APA elements make grad students sit up and take notice. Buckle up.
AUTHOR: It's OpenAI! Because obviously when a machine learning model spits out text based on processing the entire internet, what really matters is giving credit to the Silicon Valley company that built it. It's like citing "Microsoft" every time you use spell check.
DATE: Just the year, because that's plenty specific... except wait. We also need the version in parentheses because apparently "(2023)" isn't precise enough for tracking our conversations with a chatbot. But not too precise—they specifically warn that "you need to include only the year, not the exact date." And here I was losing sleep over whether to include the timestamp.
TITLE: It's "ChatGPT," italicized of course, because we must treat it with the same reverence as "War and Peace." And don't forget the bracketed description [Large language model], because your readers might otherwise think ChatGPT is a type of cheese.
SOURCE: The URL. Because everyone needs to know exactly where to find this completely unreproducible conversation. It's like citing the exact park bench where you had a chat with a stranger.
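
Assemble the four elements and, if I'm reading their template right, the reference entry comes out looking something like this (the specifics, including the URL, are reconstructed from their own example as best I can, so take it as illustrative rather than gospel):

OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

And in the text itself you'd cite it as (OpenAI, 2023), as though OpenAI personally wrote you a paragraph in March. Four solemn elements, and not one of them lets a reader retrieve a single word of what was actually said.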
But the best part? They actually write: "The goal of the bracketed text is to briefly describe the kind of model to your reader." Here I was thinking my readers might confuse ChatGPT with a small language model or perhaps a medium-sized one. Crisis averted.
And let's not forget their earnest reminder that "different large language models or software might use different version numbering." Picture the APA Style team lying awake at night, worried sick about inconsistent AI version numbering schemes.
One final loving look at this monument to overthinking everything and delivering nothing. Let's appreciate how they handle their crisis of confidence near the end. After pages of mind-numbing citation protocols, they suddenly remember that maybe—just maybe—there are bigger questions here.
"Should students be allowed to use it? What guidelines should instructors create? Does using AI-generated text constitute plagiarism?"
Fear not. The APA Style team is "actively debating" and "creating parameters" and—my personal favorite—"seeking the recommendations of APA Journals editors." Because nothing says "we're tackling cutting-edge AI issues" like consulting journal editors about proper citation formatting. It's like asking your bridge club to regulate TikTok.
In trying so desperately to regulate and formalize how we cite an AI chatbot, the APA has given us a perfect example of what happens when humans act like robots while insisting they're not robots.
What this document reveals is the terror academics feel when asked to face AI's implications. Rather than engaging those implications meaningfully, the document takes refuge in satire and formalism, clinging to old frameworks that no longer make sense and trying to maintain authority through name recognition.
The result is this surreal document that reads like a parody of academic bureaucracy. It is the "strange places" where these sorts of documents show up that make them so revealing—this isn't some random blog post, it's official guidance being recommended by a major university.
Perhaps the most telling part is how they defer the actually important questions to "future guidelines." They can't tell you if or how students should use AI, but they can definitely tell you how many spaces to put after the version number.
Educators must begin to think in earnest before the system collapses under its own failure to stand up and deal with the world as it is, not as it used to be.
1. https://libguides.asu.edu/ld.php?content_id=72006454
2. https://lx.asu.edu/ai