Brainstorming. Outlining. Drafting. Revising. Editing. These were the standard rituals of the writing mind, until artificial intelligence shattered them all.
In March 2024, researchers published preliminary findings that demolished everything we thought we knew about creative collaboration. Sebastian G. Bouschery, Vera Blazevic, and Frank T. Piller conducted an experiment with 168 participants. They pitted four brainstorming approaches against each other: solo workers, traditional groups, human-AI partnerships, and machines working alone.1
The results? Disequilibrating, to say the least.
Human-AI hybrid teams embarrassed the competition. They generated 172% more creative ideas than solo workers. Traditional brainstorming groups? The hybrids beat them by 201%. When machines worked independently, they outperformed individual humans in both quantity and creativity.
These findings go beyond productivity metrics. The demolition of creative mythology has begun.
Artificial intelligence is redefining our entire vocabulary of writing. Brainstorming, drafting, revising, editing: these were comfortable process words. They were our professional DNA, our craft identity.
Critical thinking? Once, this meant something uniquely human: the ability to analyze and reason. Now we watch machines perform these same logical operations with frightening sophistication. The question haunts us: What makes human reasoning special anymore?
Drafting used to mean wrestling ideas from the void. That sacred first attempt at creation. Now prompting algorithms for initial content, then sculpting what emerges, is the new normal. The blank page and its attendant creative terror have been replaced by overwhelming abundance. Infinite possibility becomes the new paralysis.
Revising was about seeing your work with fresh eyes. Making better through careful iteration. But what is revision when machines generate dozens of alternatives instantly? Are we revising? Or curating from endless digital possibilities?
Editing was precision craft. Cutting excess, clarifying meaning, perfecting flow. Now algorithms perform mechanical editing tasks flawlessly. Human editing evolves into something more elusive: editorial judgment, aesthetic choice, strategic vision.
Even authorship has become a labyrinth without exit.
Ideas emerge from human-AI collaboration in ways that defy attribution. The machine suggests the perfect phrase, restructures our arguments, shapes our voice through algorithmic suggestions. “Who wrote what?” becomes not just unanswerable but irrelevant.
The resistance to these changes transcends job security and reaches well beyond technological skepticism.
Existential vertigo strikes at the core. Plate tectonics at work (Malcolm, personal communication, July 2025).
When the ground beneath your feet turns to quicksand, panic follows. The natural response? Grabbing for something solid, even if it is an illusion of the past.
Writers, educators, and creators built professional identities around these sacred processes. “I draft, therefore I am.” These weren’t just skills. They were core beliefs about human uniqueness. They were what made our work valuable and irreplaceable. A part of ourselves is under siege.
The brainstorming study attacks one of our most cherished creative myths: that collaboration among humans is something close to sacred, a near-religious experience no machine can match. But the data tells a different story. Human-AI teams crushing traditional groups reveals an identity crisis in motion.
The hue and cry makes perfect sense. People aren’t defending their methods. They’re defending their understanding of themselves as writers. When someone declares “AI can’t truly be creative,” they’re really asking: “If machines can be creative, then what am I?”
This explains why the resistance is so fierce and so emotional. Reconstituting professional identity on constantly shifting ground becomes the real challenge.
The people adapting fastest share one crucial trait: they’re willing to let go of who they thought they were. They want to discover who they might become in partnership with artificial intelligence.
We’re entering an era where the most creative humans might be those skilled at collaborating with machines. This represents a meta-skill we’re all still learning to develop. The ground moves beneath our feet constantly.
The question becomes how quickly we can learn to walk on unstable terrain.
The sacred rituals of creative work are being demolished and rebuilt into something we don’t yet recognize. What emerges won’t be the comfortable certainties we once knew. Something entirely new will appear if we’re brave enough to release our grip on familiar ground.
The vertigo is real. The ground moves.
But perhaps the most interesting creative work has always happened in the space between solid footing and free fall.
Primary Source URL: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4724068
"We’re entering an era where the most creative humans might be those skilled at collaborating with machines." Well, I understand what you mean here but this has been a phenomenon for a very long time. Setting aside musical instruments as machines, producers have been using digital samplers and sequencing machinery for decades. This AI machine, though, happens to operate in a particularly fluent way.
I recently inquired about the difference between what "reasoning" means in the AI world and what it means in the human world. The problem is that too many descriptions of AI capabilities lean on anthropomorphic metaphors for convenience when, in fact, the underlying processes are quite different.
Basis of Reasoning: Humans - grounded in embodied experience, memory, emotion, perception, and social context. AI - based on statistical correlations in training data (textual patterns).
Knowledge Acquisition: Humans - interactive, sensorimotor, social learning, and real-time experience. AI - passive ingestion of large textual datasets (e.g., books, websites, forums).
Understanding: Humans - semantic, experiential, and situational understanding. AI - pattern recognition and probabilistic modeling (no true "understanding").
Motivation: Humans - driven by goals, needs, affect, and context. AI - has no goals, intentions, or motivation of its own.
Interpretation of Ambiguity: Humans - informed by culture, psychology, and emotional nuance. AI - relies on textual precedent; struggles without clear patterns or external validation.
Novel Inference: Humans - capable of inventing or imagining from limited data or abstract principles. AI - tends to interpolate from training data; struggles to extrapolate without precedent.
Grounding: Humans - grounded in reality via perception and action. AI - ungrounded; operates in a symbolic space without sensory input or lived experience.
While these differences are obscured by the fluency of AI output, they are (IMHO) irreconcilable: discrete forms of existence. A toy sketch of the "textual patterns" point follows below.
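To make that point concrete, here is a deliberately crude sketch in Python, added purely for illustration. It is nothing like how modern large language models are actually built (they use neural networks trained on vast corpora, not lookup tables), and the tiny corpus and function names are invented for this example. It shows a toy bigram model that produces fluent-looking word sequences from nothing but counted co-occurrences: textual precedent, with no perception, goals, or grounding behind it.

```python
# Toy bigram text generator: "reasoning" as nothing more than counted
# textual patterns. Purely illustrative; real LLMs are neural networks,
# but the ungrounded-statistics point is the same.
import random
from collections import defaultdict

corpus = (
    "the writer drafts the essay and the editor revises the essay "
    "the writer revises the draft and the editor reads the draft"
).split()

# Record which words were observed to follow which (the "training data").
followers = defaultdict(list)
for word, next_word in zip(corpus, corpus[1:]):
    followers[word].append(next_word)

def generate(start: str, length: int = 8, seed: int = 1) -> str:
    """Sample fluent-looking text using only observed successors."""
    random.seed(seed)
    current, output = start, [start]
    for _ in range(length):
        options = followers.get(current)
        if not options:          # no textual precedent: the model is stuck
            break
        current = random.choice(options)
        output.append(current)
    return " ".join(output)

print(generate("the"))  # a grammatical-looking but goal-free word string
```

Run it and the output reads like a fragment about writers and editors, yet nothing in the program knows what a writer, an editor, or an essay is. That gap between surface fluency and grounded understanding is the difference the list above is pointing at.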
Check out "Artificial Intelligence as a New Form of Agency (not Intelligence) and the
Multiple Realisability of Agency Thesis" by Luciano Floridi. It's high-level academia, but it perfectly articulates how AI is not actually intelligent as we understand it in our own terms.
I’d like to work with students to help them see that more is not better. Call me crazy, but I think the human-AI blend has a future that is good for all.