Marshall McLuhan famously quipped that post-medieval Europeans, after Gutenberg, saw more ancient texts published and available in local bookstores in the 16th century than anyone alive during the Middle Ages had ever seen. Publishers have always catered to public interests, and those interests bend toward whispers of the forbidden: ancient manuscripts buried in thick-walled libraries high above sea level, something salaciously spiritual inside those rare books that tempted ordinary people. Suddenly, Gutenberg’s press printed these rare books relentlessly for a century or more and sold them to readers from all walks of life. Anybody could have these books.
A conference held by the Library of Congress Center for the Book [chaired by historian Daniel Resnick (1980), who, by the way, was a thought leader for the New Standards Project, 1991-1996] brought together educators and academics studying literacy to brief government officials on the history of reading, a topic deserving attention since the practice appeared in human society some 5,000 years ago. Policymakers were searching for a bearing on where the country should be heading along the grand arc of time. The substance of the next few paragraphs derives in part from Resnick’s edited volume of the conference papers. Download the pdf below.
*****
Scholars of the history of literacy have argued that the printing press itself was a necessary but insufficient cog in the unprecedented spread of alphabetic literacy throughout Europe in the 16th century, and especially Sweden in the 17th (cf. Gee, 1996). They see the printing press, in essence, as just one piece of a black box labeled literacy (Latour), invisibly guiding the collective brain in shared acts of reading across time, until now we see a global, digitized black box. Certainly, no one in the 15th century would have thought the printing press would show up in the Bill of Rights as a guaranteed Constitutional freedom. Gutenberg, like Steve Jobs, saw a massive wave of collective motive to read old books made new, to facilitate trade across regions, and he did what practical visionaries do: He imagined a way to make books accessible to humanity. He helped humanize reading.
Gutenberg invented his fame in response to a historical urgency for wider access to ancient manuscripts. People wanted to read manuscripts kept beyond their reach; therefore, the printing press was invented as a tool to open library doors. The printing press didn’t open those doors, however—books did. A human invented a tool to bring the solution to the task of the times and profited from his invention. Printing presses could easily have ended up in the dustbin of history if European societies had scoffed at them.
What was worth printing? What books were the people interested in owning or borrowing? Ancient manuscripts. Sacred manuscripts. After a thousand years of manual labor by religious clerics in dark and dingy places, devoted to preserving precious ancient manuscripts, hot-off-the-press books became knowable to people who longed for magic. The printing press made it possible to read books. During the 16th century it produced frigates called books for the masses to rehash the meaning of ancient times. Then all hell broke loose. Paradigms started twisting into new shapes.
Technology was not the reason for the Renaissance; human imagination created tools to generate more, better, and more accessible stable, peer-reviewed knowledge and expertise, as well as exquisite aesthetic productions. Leonardo’s writing backwards in his journals, using words and sketches to store and propel his thinking, is a powerful example of analogue artificial cognitive assistance. What would da Vinci have done with an iPad? The Age of Enlightenment in the 18th century fostered ideological cosmic paradigms of inequity that, unfortunately, ushered in slavery, an Industrial Revolution, two World Wars, the Atom Bomb, and a fragile structure for world peace now undergoing a stress test.
Now we have AI.
Pent-up collective demand for some simulacrum of intelligence?
*****
The reluctance of teachers to welcome AI into the classroom is easy to understand. Just a week ago I talked with a former Sac State colleague about a serious case of self-defeating and tragic plagiarism. This professor had assigned a personal reflective essay for a final exam, a well-crafted assignment meant to function as a culmination of the course. Design plagiarism-proof writing prompts, so goes the notion, as the antidote to AI infection. But he was faced with an essay that he believed was AI-generated, and he asked me to read it.
Before I read the essay, the professor had run it through Turnitin. Within seconds Turnitin tagged segments of bot-produced text and reported a finding of 82% artificial content. Never having used Turnitin in any form in my own teaching, I was eager to pit myself against this bot named Turnitin. Of course, I would be the gold standard in my own estimation.
I read maybe two sentences before I had seen enough to agree. The essay read as if I were reading GPT-3.5, the free version, the version Smith4 recommends when traffic is heavy, the version that loses track of conversations, that transported Jane Eyre into Act II of Hamlet, the great confabulator. I didn’t read the whole essay. I couldn’t. Unless I’m going to publish a bot conversation, I rarely read whole sections of output—I sample and do a cost-benefit analysis, much like deciding what to click on when investing time in a Google search. People don’t get what the bot is doing…
The Professor’s Dilemma. George Bernard Shaw could have scripted a deliciously funny and ironic play from the moral elements of this situation: “Life does not cease to be funny when people die any more than it ceases to be serious when people laugh,” sayeth Shaw’s doctor. This professor was caught between a rock and a hard place though he was still able to laugh through his dejection. On one hand, he was certain the paper had been plagiarized. On the other, he had no acceptable evidence. The university no longer accepts Turnitin findings in academic indictments for plagiarism.
My advice? Have a rational conversation with the student. Ask the student to describe the writing process they used, what the essay means to them, how it connects to the work in the course, what they are most proud of in it as a writing sample. Run the essay through Turnitin and ask the student for comment. Since the essay does not respond to the topic (the essay is an abstract discussion of course content, not a personal reflection), the grade should reflect the facts. The student could contest the grade.
*****
AI and traditional education are at loggerheads, as near as I can see. The tables have turned between the emergence of the printing press and the coming of Artificial Intelligence. Following the medieval period, the masses were hungry for what the printing press could do for them. The publication of old books and book fragments consumed the printing business. And then came rebirth. Following the announcement of natural language algorithms in November 2022, the public was in no mood for machines that could rot the minds of children, steal good-paying jobs, and turn college students into imposters.
Like all powerful tools—cranes with wrecking balls, guns, bulldozers, chainsaws—AI is not a toy. Using it in complete ignorance of its functions and applications is dangerous. For this reason it’s perhaps a very good thing that public education has self-consciously ignored it. Because of its awesome production of the sine qua non of human intelligence, a written story or essay, and because on close inspection those stories and essays crumble and crack like balsa wood or styrofoam, AI has exposed the dehumanizing consequences of the hidden machinery of the factory model of schools.
How are committee-written textbooks (committees that will undoubtedly write future textbooks using AI) different, from the perspective of a student, from texts generated by bots? The Knight of the Right Answer lives in a castle inside the Four Walls of the Text in both cases. Students relinquish all rights to epistemological autonomy and ownership of knowledge once they cross the threshold of the classroom.
LLMs will pose a threat in classrooms so long as teachers misunderstand what bots are doing when they “read” and “write.” They do not read nor do they write in any way recognizable as human. Students have no chance of grasping the nature of a command stance when working with a bot and the obligation to “do their own work” if teachers do not make clear that bots are technical tools that are powerful when well-used and dangerous when used carelessly or thoughtlessly.
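The claim that bots predict rather than read can be made concrete with a toy sketch. This is a hypothetical miniature, not any real model: actual LLMs use neural networks trained on billions of parameters, but the stance is the same—counting which token tends to follow which and emitting a probable continuation, with no comprehension anywhere in the loop.

```python
from collections import Counter, defaultdict

# A toy "language model": it never reads. It counts which word tends
# to follow which in its training text, then emits the most frequent
# continuation. Prediction, not comprehension.

corpus = "to be or not to be that is the question".split()

# Count bigram frequencies: how often word b follows word a.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else None

print(predict_next("to"))   # chosen by frequency alone, not meaning
print(predict_next("not"))
```

A student who sees that the "writing" here is frequency lookup, not thought, is better placed to understand both the power and the emptiness of the output.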
Donald Graves wrote about writer’s welfare in 1976. Schools have a long history of taking charge of reading and writing events in a zealous attempt to prevent student failure. Academic writing pedagogy has an unfortunate bias valuing prompted writing tasks, formalized writing process protocols, and examiner audiences. Students experience a steady diet of composing pieces of writing they care little about and feel no ownership of. In an effort to prepare students for more complex writing assignments of the same ilk, students are groomed subliminally for mild to moderate plagiarism.
Students need to understand huge differences in paradigms: human literacy differs categorically from machine processing in ways I’ll let the bot explain below. Educators need to use the bot themselves, often, in the context of real academic work, both to see through the mechanistic, sterile text bots produce and to appreciate its incredible affordances. You can’t mistake it for human once you understand its tells. Bots distinctly do not write anything at all. And they cannot read a word.
*****
Part 1
User: Explain the most significant differences between the experience of reading a Shakespearean play between a human reader and an LLM.
Smith4: The experience of reading a Shakespearean play differs significantly between a human reader and a Language Model (LM), particularly in the context of comprehension, interpretation, and emotional engagement. Here are the most significant differences:
1. Comprehension:
- Human Reader: Engages in nuanced comprehension by drawing on cultural, historical, and linguistic context to interpret language, wordplay, and symbolism within the play.
- LM: Relies on pattern recognition, statistical analysis, and pre-trained language data to interpret the text, often lacking deep understanding of the contextual intricacies that enrich literary works.
2. Interpretation:
- Human Reader: Applies personal experiences, empathy, and emotional intelligence to empathize with characters, discern underlying themes, and appreciate the play's moral and philosophical implications.
- LM: Processes text based on statistical patterns and linguistic structures, often lacking the ability to empathize or interpret moral nuances, subtext, or emotional depth in characters' interactions.
3. Emotional Engagement:
- Human Reader: Forms emotional connections with characters, experiences empathy, and resonates with the play's emotional highs and lows in a deeply personal manner.
- LM: Lacks emotional awareness and personal resonance, processing text devoid of emotional engagement or empathetic response to the characters' experiences and dilemmas.
These differences underscore the distinct nature of literary engagement between human readers and LMs when reading Shakespearean plays, emphasizing the nuanced, emotionally rich, and culturally embedded nature of human interpretation compared to the statistical, contextually limited processing of language by LMs.
*****
Part 2
User: Explain the differences between the way a human reader would read a report of an experiment in a medical journal and the way an LLM would read the same article.
Smith4: The differences between how a human reader and a Language Model (LM) would approach reading a report of an experiment in a medical journal are rooted in the nuanced comprehension, contextual understanding, interpretive capabilities, and cognitive engagement that distinguish human reading from the statistical language processing performed by LMs.
Here are the key differences:
1. Contextual Understanding:
- Human Reader: Leverages background knowledge, scientific literacy, and contextual understanding to interpret complex medical terminology, experimental methodologies, and scientific findings within the broader landscape of medical research.
- LM: Relies on statistical patterns, pre-trained language data, and contextual limitations to process the text, lacking deep contextual understanding of specialized medical concepts and the broader scientific implications of the research.
2. Cognitive Engagement:
- Human Reader: Actively engages in critical thinking, skepticism, and analytical evaluation of the experimental design, methodology, results, and conclusions, often drawing upon personal experiences, prior knowledge, and scientific understanding to assess the rigor and validity of the research.
- LM: Processes the text based on statistical language patterns, lacking the cognitive engagement, critical appraisal, and scientific reasoning that human readers employ to evaluate the experimental procedures, results, and implications of the research.
3. Interpretation and Inference:
- Human Reader: Draws inferences, interprets statistical data, and synthesizes complex information to discern the significance, limitations, and potential impact of the research on medical practice, patient care, and scientific advancement.
- LM: Focuses on pattern recognition and statistical analysis, often lacking the ability to infer, contextualize, or interpret the broader implications and real-world applications of the experimental findings within the field of medicine.
These differences highlight the nuanced, interpretive, and cognitively engaging nature of human reading, emphasizing the rich, contextual understanding, and critical thinking applied by human readers when engaging with scientific reports. In contrast, LMs rely on statistical processing and contextual limitations, presenting a more analytical, contextually limited approach to language understanding and interpretation.
*****
Part 3
User: Explain the differences in the way a human would write a fictional short story and the way an LLM would write one.
Smith4: The differences between how a human writer and a Language Model (LM) would approach writing a fictional short story are rooted in the creative process, emotional engagement, contextual understanding, and interpretive capabilities inherent to human creativity, contrasting with the statistical language generation and contextually limited nature of LM-based story creation.
Here are the key differences:
1. Creativity and Originality:
- Human Writer: Embarks on a creative journey that draws upon personal experiences, emotions, and imagination to craft characters, settings, and plotlines that reflect individuality, originality, and artistic expression.
- LM: Relies on statistical language generation, pattern recognition, and pre-trained data to produce text that mimics language structures and content based on learned linguistic patterns, often lacking the capacity for true originality and personal artistic expression.
2. Emotional Engagement:
- Human Writer: Infuses the narrative with emotional depth, empathy, and personal resonance, using storytelling to evoke, explore, and express complex human emotions, experiences, and interpersonal relationships.
- LM: Lacks emotional understanding, personal resonance, and authentic empathy, often generating text without the genuine emotional engagement, depth, or nuanced portrayal of human experiences and feelings.
3. Contextual Understanding:
- Human Writer: Leverages cultural, historical, and linguistic knowledge to create narratives rich with context, symbolism, and depth, grounding the story in broader human experiences, societal themes, and historical intricacies.
- LM: Processes text based on statistical patterns and language structures, lacking the deep contextual understanding, cultural insight, and symbolic richness that human writers infuse into their narratives, often resulting in contextually limited storytelling.
4. Interpretation and Subtext:
- Human Writer: Embeds layers of meaning, subtext, and thematic significance within the narrative, allowing readers to engage with thought-provoking ideas, moral dilemmas, and philosophical inquiries that transcend the surface plot.
- LM: Lacks the interpretive capability to embed complex subtext, thematic depth, or profound philosophical inquiries within the narrative, often resulting in a more straightforward, contextually limited storytelling approach.
These differences underscore the nuanced, emotionally rich, and contextually embedded nature of human storytelling, emphasizing the individuality, creativity, and emotional resonance that human writers infuse into their storytelling, compared to the more analytical, contextually limited nature of language generation by LMs.
*****
Part 4
User: Explain the differences between how a human writer would write a report of an experiment to be published in a medical journal and how an AI would go about writing this report.
Smith4: The differences between how a human writer and an AI would approach writing a report of an experiment for publication in a medical journal are grounded in the nuanced understanding, interpretive capabilities, scientific reasoning, and contextual depth inherent in human scientific writing, contrasting with the statistical language generation and contextually limited nature of AI-based report writing.
Here are the key differences:
1. Scientific Reasoning and Interpretation:
- Human Writer: Engages in critical analysis of experimental design, methodology, and results, employing scientific reasoning, inference, and interpretive skill to communicate the significance, limitations, and implications of the research within the broader scientific landscape.
- AI: Draws upon statistical language patterns, pre-trained data, and language generation models to produce text that lacks the nuanced scientific reasoning, interpretive depth, and critical appraisal employed by human writers in communicating experimental findings and their implications.
2. Contextual Understanding:
- Human Writer: Leverages domain expertise, scientific literacy, and contextual understanding to articulate complex medical concepts, experimental procedures, statistical analyses, and scientific implications within the broader framework of medical research and clinical practice.
- AI: Relies on statistical patterns and contextually limited language generation, often lacking the deep contextual understanding, domain expertise, and scientific literacy required to comprehensively articulate, analyze, and interpret the experimental processes and results within the medical research domain.
3. Ethical and Societal Implications:
- Human Writer: Considers ethical considerations, societal impact, and patient relevance when presenting the experimental findings, often addressing potential applications, ethical dilemmas, and societal implications of the research within the medical and healthcare landscape.
- AI: Lacks the ethical reasoning, societal insight, and patient-centered perspective that human writers consider when addressing the ethical and societal implications of the experimental findings, often resulting in a more analytically focused, contextually limited approach to text generation.
4. Communication of Complex Scientific Concepts:
- Human Writer: Communicates complex scientific concepts, statistical analyses, and methodological intricacies using language that is accessible, informative, and contextually rich, catering to the diverse readership of medical journals while maintaining scientific rigor.
- AI: Struggles to convey complex scientific concepts, methodological intricacies, and statistical analyses with the same level of accessible, informative, and contextually rich language that human writers employ, often resulting in a more technical, less reader-friendly presentation of the experimental details.
These differences underscore the nuanced, contextually rich, and scientifically informed nature of human scientific writing, emphasizing the critical thinking, interpretive depth, and ethical consideration that human writers integrate into reporting experimental findings for publication in medical journals, compared to the more analytical, contextually limited nature of language generation by AIs.
*****
This is very good. Very useful. We are heading in a very similar direction. I like the idea of K-12 as self-consciously ignoring AI. That rings true. It also calls folks like me to a different kind of attention work.