Blink your eyes.
Blinking is the unsung metronome of our existence—a physiological tick, a reflexive curtain that opens and closes without volition or deliberation. It’s the jerky movement of the second hand of an old-fashioned watch, marking time without measuring the moral weight of its service. Unlike a moral act that engages conscious choice, reflective judgment, and the endorsement of ethical principles, blinking is like the silent spaces between words in the prose of our daily lives.
Its neurological underpinnings immunize it against the scrutiny of conscience. At its most basic level, sensory receptors in the cornea detect irritants or dryness, sending impulses via the ophthalmic branch of the trigeminal nerve (cranial nerve V) to the brainstem’s spinal trigeminal nucleus. From there, a rapid reflex arc transmits signals to the facial motor nucleus in the pons, which commands the orbicularis oculi muscle to contract—producing a blink.
Now blink one eye.
Notice how you must think about it to wink. A wink is not just one blink—it can be the eye’s own deliberate punctuation mark in the discourse of human interaction. Unlike the unthinking, reflexive rhythm of a biological blink, winking is an act of intentional ocular rhetoric: you close one eye with purpose, as if to say, “I choose this moment to convey something secret, playful, or flirtatious.” In that moment of choice, the wink becomes a discursive act laden with meaning, entering the sphere of moral consideration by virtue of its communicative intent. It’s the difference between the brain’s autopilot and a self-conscious signal—a refined, nuanced gesture that speaks and bears the ethical weight of its context.
*****
Not every activity is a moral activity. A good working definition of a moral activity is “an activity consciously chosen or avoided with an awareness of right and wrong.” Many actions we perform are either automatic or involuntary and don’t involve any ethical deliberation. Actions such as breathing, blinking, or digesting food occur automatically without any moral choice; even so, choosing to inhale an illegal substance, to wink at a person in a bar, or to urinate on the street can quickly become a matter of right or wrong. Routine activities like tying your shoes, turning on a light switch, or even sleeping are usually done without deliberate moral reflection. The same is true of spelling and reading words. Reading is so automatic that we can’t not read familiar words even if we try.
Many everyday actions (e.g., checking the weather report or reading a novel for pleasure) are usually morally neutral—they don’t express a commitment to ethical values or require moral judgment of right or wrong, more right or more wrong. Checking the weather report, however, becomes problematic if you are a general in the situation room deliberating the precise moment to drop a bomb on an enemy target. In that context, a once innocuous act transmutes into a decision loaded with moral implications, especially for the warriors you might send on a dangerous mission, because your casual query about the forecast now informs life-and-death strategic choices.
Practically speaking, almost every human activity with consequences for other humans carries the potential of being a moral activity, one requiring ethical reasoning. But when it comes to writing, knowing the moment when that potential becomes a moral question can be a challenge.
*****
The anthropologist Brian Street developed a theory of literacy that helps clarify writing in the context of moral reasoning. Street identified the common-sense understanding of reading and writing as autonomous, mechanical processes and named it the autonomous model. This model treats literacy as a set of technical, cognitive skills—decoding, comprehension, and composition—that operate independently of their social context. In this view, literacy is neutral; writing is merely the mechanical application of learned skills, like a craftsperson using a tool to fulfill a preordained task. Literacy is automatic: the spelling and sounding out of words, the mechanical tracking of lines in a text, the algorithmic mental action of comprehending or composing.
In contrast, the ideological model contends that literacy is inseparable from the cultural, political, and social milieu in which it is practiced. Writing is ideological because it is undertaken within a value-laden human-to-human context, much like a wink. As a communicative act, writing is inherently moral because the writer’s intentions, which shape meaning for the reader, are embedded within it. Those intentions, built into the language of the text, can be for good or for evil. Writing is not a sterile, technical process but a discursive act imbued with ethical significance.
The choices made in word selection, tone, and structure exhibit a moral calculus in the design that both reflects and can influence societal values. The text takes on the writer’s agency and invokes moral responsibility. Even when writing appears to lean toward autonomy—when, for instance, the decisions regarding a text’s meaning are predetermined, mechanical, or dictated by external frameworks (e.g., a parking ticket)—the interplay between the writer’s internal convictions and intentions and the broader cultural context means that literacy remains, at its core, a moral enterprise. We witnessed this recently when eight attorneys resigned from the Department of Justice rather than write a text dismissing bribery charges against the mayor of New York, Eric Adams.
*****
Consider writing instruction under the autonomous model as a kind of bureaucratic choreography—a rigid dance where students are expected to perform preordained steps rather than improvise their own moral movement. In this regime, writing is reduced to a technical exercise: organize your sentences, align your paragraphs, and choose words that fit the teacher’s approved lexicon. The student becomes a well-trained automaton, meticulously assembling ideas to mirror an external blueprint, while their inner voice—rich with ethical nuance and personal conviction—remains muzzled.
This approach forecloses moral engagement by stripping writing of its potential as a discursive act imbued with the writer’s unique values and social context. Instead of writing as an intimate conversation with oneself and one’s community—a process that might reveal the poet’s hidden passions or the philosopher’s deep-seated moral dilemmas—the act becomes a sterile transaction, where the only currency is conformity. The classroom, in effect, transforms into a moral factory, where the product is not a genuine articulation of human experience but a sanitized, teacher-approved artifact.
*****
Enter LLMs—the algorithmic scribes of our age—whose efficiency disrupts this bureaucratic choreography. These large language models, capable of producing perfect teacher-approved prose at the flick of a prompt, risk further slowing the moral pulse of writing. Instead of fostering a conversation with oneself and one’s community, students might offload their creative agency, which is scarcely required in autonomous assignments, to these digital automatons. In this extreme scenario, writing becomes not only a mechanical exercise in compliance but also a process outsourced to machines. The classroom, already transformed into a factory by the rigid demands of the autonomous model, now faces the additional threat of digital homogenization—a future where writing is reduced to manipulating algorithmic outputs, and the human writer, with all their ethical nuance and passionate individuality, never surfaces above the hum of code.
Enter LLMs into the classroom instead as the vanguard of an ideological model of literacy—a paradigm in which writing transcends technicality to become a dynamic, contextually embedded conversation. In this model, LLMs are not prompted to produce finished text in accordance with the teacher’s intentions; they perform at the direction of the writer, assisting in ways that only LLMs can, given their wide access to information. They are a library you can talk with. Unlike the rigid choreography of the autonomous model, where students follow preordained steps, engaging with LLMs invites learners to interrogate, remix, and contest the digital output.
The machine’s sophisticated algorithms can expose students to a plurality of voices, challenging them to decipher, critique, and refine layered narratives as point and counterpoint—to do moral readings of bot output. Used under the regulation of the individual writer’s moral compass, LLMs can become catalysts for moral and ethical reflection—tools that supply information and spark debates about the values underlying language. They urge students to reclaim their creative and moral agency and to reassert the human writer’s irreplaceable role in articulating what it means to be part of a living, diverse community.
*****
Alex enters his first year of university buoyed by an impressive high-school track record. Under the Common Core regime, his writing assignments were meticulously structured: teachers would provide the literary excerpts, highlight the relevant passages, and often supply guiding questions that all but spelled out the core ideas. In English class, every essay prompt was tied to a snippet of a larger text—perhaps a few pages from The Scarlet Letter or a single scene from Macbeth—ensuring that Alex rarely needed to engage with an entire novel or play on his own. He excelled in annotating these “bite-size” readings, drawing arrows and scribbling margin notes to show he was “interacting with the text.” This approach yielded a consistent flow of polished paragraphs: Alex learned to harvest quotes from the teacher-approved source material, slot them into a neat introduction-body-conclusion format, and meet the assignment’s word count without too much fuss.
By senior year, Alex had become a master at playing this game. He knew exactly how to identify the teacher’s favored theme—be it irony, symbolism, or conflict—and craft a paragraph that demonstrated textual evidence of that theme. The annotations functioned like a blueprint, allowing him to fill in the rhetorical blanks with minimal independent research or broader reading. In short, he performed well on precisely the tasks his high school classes rewarded, securing top marks in the process.
Now, however, Alex is in a freshman writing seminar at a university where the rules of engagement look quite different. No longer are the readings conveniently excerpted; no longer does the professor supply pre-packaged analysis prompts. Instead, Alex has to buy a 300-page anthology of essays, poems, and academic articles. He faces open-ended research questions that require him to explore a topic’s context, debate multiple viewpoints, and synthesize sources far beyond a single excerpt. In essence, he must pivot from the tightly choreographed “fill-the-space” writing to an approach that demands deeper reading and original insight—neither of which was central to the writing tasks he aced in high school. Here is a major writing assignment from his first-year writing seminar.
First Writing Assignment: Digital Media & Interpersonal Relationships
Assignment Prompt:
Develop an original thesis on the influence of digital media on interpersonal relationships. Your essay should support your argument with critical analysis and examples drawn from current technological trends. Document all sources. Do not use AI for this assignment.
Parameters:
• Breadth of Sources:
  • Conduct research beyond the course readings.
  • Include at least three scholarly articles or reputable news sources on digital media.
• Original Argument:
  • Craft a unique perspective that goes beyond summarizing existing viewpoints.
  • Clarify how digital platforms shape, challenge, or redefine human connection.
• Evidence & Analysis:
  • Blend personal observations (e.g., experiences with social apps) with hard data (e.g., usage statistics, demographic trends, or sociological studies).
• Length & Format:
  • Your essay should be 2,500–3,000 words.
  • Use proper citations in MLA, APA, or Chicago style.
• Ethical Engagement:
  • Consider moral or social implications such as privacy, authenticity, or equity in digital communication.
Guiding Questions:
• In what ways do digital platforms transform how people form and maintain friendships or romantic relationships?
• How do issues like privacy, digital identity, and algorithmic bias shape these interactions?
• What are the potential benefits and drawbacks of living in an always-connected digital ecosystem?
• Are certain demographic groups disproportionately affected—either positively or negatively—by digital communication tools?
Instructor’s Note:
You may NOT use AI in any way to do this assignment. AI reduces the quality of your writing and your learning. If you use it, you will fail this assignment and have no opportunity to redo it. Instead of merely rehashing provided materials as you experienced in high school, you are expected to initiate your own line of inquiry and select your own sources. While class discussions and course materials can inform your work, the bulk of your essay should reflect independent research and a self-directed argument. Your goal is to articulate a clear stance that contributes something new or provocative to the conversation about digital media and relationships.
Good luck and remember: your unique voice and critical insight are at the heart of this task!
*****
Narrative 1: Alex as a Digital Immigrant
When confronted with a sprawling 300-page anthology and an open-ended research prompt on digital media’s impact on interpersonal relationships, Alex felt overwhelmed. He tried desperately to get a handle on what he might write, but he couldn’t.
He didn’t dare use AI. In high school he had used AI once to pump out an essay for him, but he felt guilty and worried constantly about being discovered as a cheater. And he didn’t know anyone in the writing class well enough to collaborate on finding a path into the assignment.
In a desperate bid to fill the word count, he resorted to hastily cutting and pasting passages from the provided textbook and a few online sources. What began as a patchwork of external ideas morphed into a paper that was little more than a collage of borrowed words.
But he was proud of it because he had worked so hard. When his essay was returned, marked up with notes about unoriginal content and plagiarism, Alex realized that his attempt to mimic high-school precision had backfired—his work, stripped of genuine insight, had become an ethical and academic disaster.
Narrative 2: Alex as a Digital Native
Determined to meet the challenge head-on, Alex weighed the morality of using AI to scaffold his response to this writing task. It wasn’t an easy decision. If he used it, he would be disobeying an order from his teacher, not something he was accustomed to doing. But in the end he decided that he knew more about the ethical use of AI than the course instructor did. He had taken a semester-long course in his senior year of high school titled “AI Theory and Writing,” at the end of which he had been asked to write out his own position on the moral use of LLMs. He had no qualms about his integrity as a writer.
Alex decided to risk using a large language model as a brainstorming and editing tool. He had internalized the motto of the senior-year course: Be the captain of your own intellectual ship. He began by having an exploratory discussion with Claude about the assignment itself. He explained his predicament and asked Claude not to tell him what to do but to help him explore what a reasonable approach to the assignment would be. He had no one else to turn to.
At first Claude seemed a bit confused. Alex provided more context for the course, and together they homed in on the writing scenario. After they talked through specific lines in the assignment that seemed interesting, they landed on one statement: “Clarify how digital platforms shape, challenge, or redefine human connection.” The thought popped into Alex’s head: “I’m in the middle of a good example of how digital tools challenge human connection. My human connection with my teacher is being shaped by the fact that I’m forbidden to use AI. I’m afraid to even talk to him about the assignment.” From there he and Claude put the assignment under a microscope.
Alex closed the Claude app and started drafting thesis statements by hand, an approach he had learned to value in the AI Theory course. He shared them with Claude in a new conversation because he wanted Claude to look at the work with fresh eyes, uncolored by the previous chat. They discussed thesis statements: Claude would offer one, they would discuss it, and then Alex would offer one. Sifting through these collaborative statements, Alex selected those that resonated with his own observations and developed them further with independent research. After reading in original sources and the class materials, he landed on a thesis statement that expressed his perspective and experience perfectly. Along the way he collected annotations from the source materials, using strategies he had learned in high school.
“Digital tools like LLMs have in recent years had a major impact on relationships between teachers and students in classrooms, with consequences for all of society.”
He used his spiral-bound journal to mind map a semantic vision for the essay, one he would refine as he worked on a first draft. He had a lot of space to work with—up to 3,000 words. As his map filled, he experimented with potential ideas by writing a topic sentence and prompting Claude to draft two or three alternate takes on the idea. Then he would decide whether to keep the idea on the map or set it aside. Rather than copying passages, he used the bot’s output as a springboard for his own original analysis.
Throughout the process, Alex meticulously revised and rewrote any bot-generated text that he decided worked well, infusing it with his personal voice, critical insights, and moral reflections on digital ethics involving other people. When he submitted his final essay, it was not only a well-structured, insightful argument but also a testament to the synergy between human creativity and technological assistance. It was a genuine work of intellectual labor that, in his judgment, respected academic integrity: he had overruled what he decided was an unfair rule in his case while harnessing the power of AI for moral and scholarly ends. His teacher did not call him on the carpet for plagiarism, and he knew in his heart that he had not plagiarized. The teacher loved the paper!
*****
The link between writing and morality grows more intricate as technology reshapes our relationship with text. As with the difference between a blink and a wink, the line between mechanical production and meaningful creation often lies in conscious intention, engagement, and a sense of agency. As we navigate this evolving landscape, perhaps the key question is not whether we use tools like AI, but how their use impacts our agency, voice, and moral responsibility as writers.
What makes writing truly “ours”? When is efficient production more appropriate than authentic individual expression? And in an age where text can be generated at scale, how do we preserve the essential human elements that give writing its moral weight? The answers may lie not in rigid prescriptions but in thoughtful reflection on how we choose to engage with the tools at our disposal and what those choices reveal about our values as writers and thinkers. Courses that give students sustained opportunity to develop their own theoretical and moral approaches to AI are crucial, in my view. Of course, this path won’t cure the world of plagiarism, but it may help young people move into the future with a deeper understanding of what is at stake. Wishing AI would go away only postpones the development of courses that teach about and practice AI in a mentored, thoughtful, ethical context—courses that both teachers and students need to take.