We're living through an extraordinary moment in education. AI tools have become part of the academic workflow, reshaping how students think, write, and create. To better understand these changes, I've been doing what I've done for half a lifetime on problems of far less complexity (say, learning to read and write): turning to high-quality research. I'm a great believer in scientific research.
In this discussion, I want to explore two fascinating studies about human-AI collaboration and their implications for teaching. But I also have a second goal: refreshing our collective memory about how to read research critically in an era of information bombardment.
Too often, I see knee-jerk reactions to studies or casual dismissals prefaced with "Well, I didn't read the whole thing, but it seems like..." This kind of superficial engagement is intellectually perilous in these Trumpian times, especially when we're making decisions that affect the future of our children.
I research; therefore, I know (ground-breaking stuff)
Let's start with the gold standard: Lee and Chung's 2024 study on ChatGPT's impact on creativity, published in Nature Human Behaviour. This isn't a journal you’re likely to find in a dentist’s office. With an impact factor of 15.5, it ranks in the 98th percentile for experimental psychology. A high impact factor suggests that research in the journal is frequently cited by other scholars. Journals with high impact factors are highly selective, accepting only top-tier research adjudged worthy of publication after rigorous peer review.
[Brute Fact: An article I submitted for publication in a top-tier journal went through five rounds of peer review over eighteen months before it was rejected. I learned more from that feedback than I ever knew was in the data. It was a fascinating experience.]
Nature Human Behaviour is a Q1 journal, placing it in the top quartile of experimental psychology publications worldwide. An impact factor over 15 is exceptionally rare, found only among a handful of the world's most competitive titles. Surviving peer review at this level warrants substantial confidence in a paper's methodological rigor and reliability. The Lee and Chung paper's rapid accumulation of citations (36 so far) signals strong immediate impact within the academic community, an early sign that it may become foundational research.
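If you've never had reason to care what an impact factor actually measures, the arithmetic is simple: it's the average number of recent citations per article. Here is a minimal worked example, using invented round numbers (not Nature Human Behaviour's actual counts) that happen to land on 15.5:

$$
\mathrm{IF}_{2024} = \frac{\text{citations received in 2024 by items published in 2022--2023}}{\text{citable items published in 2022--2023}} = \frac{3{,}100}{200} = 15.5
$$

In other words, the average paper the journal published over the previous two years was cited more than fifteen times in a single year, an extraordinary rate for experimental psychology.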
Why does this matter for educators trying to understand AI's role in student learning and creative processes? Because peer review at this level is brutal. Surviving it is miraculous. Multiple experts scrutinize every claim, every statistical test, every interpretation. The authors ran five separate experiments, demonstrating that ChatGPT enhances incremental creativity, though not radical innovation. When a study passes through this gauntlet and appears in such a prestigious venue, we can trust its core findings.
I research; therefore, I think (tantalizing stuff)
Let’s examine the Brain-to-AI study investigating neural connectivity during AI-assisted writing, which represents a different tier of research credibility. This MIT Media Lab study examines the neural and behavioral effects of using AI (specifically large language models) for essay writing, using EEG monitoring and a participant pool of 54 university students.
The research is available in full at the MIT Media Lab's official publication portal, and this fascinating exploration of how our brains interact with generative AI has MIT's Institutional Review Board (IRB) approval. But what does that approval actually mean?
The Media Lab maintains high academic standards, and many of its projects, especially studies involving human subjects and sensitive data like EEG, go through MIT's IRB for ethical review. But the Lab does not adhere strictly to conventional academic publishing structures, sometimes embracing more experimental directions. Most of its self-published studies (including the Brain-to-AI/ChatGPT EEG study) are not peer-reviewed at the time of first release.
The Media Lab's portal makes early-stage research public, which is valuable for transparency and for fostering discussion, but the work is explicitly labeled as preliminary. Some projects begin the external peer-review process after online release, but that can take months or longer, and many portal-hosted pieces are never reviewed by external experts. I especially appreciate the Lab's effort to discourage media outlets from making a public spectacle of the results.
Without peer review, we're reading a first draft of science—potentially groundbreaking, certainly intriguing, but fundamentally unverified by field experts. The sophisticated analyses of alpha, beta, theta, and delta brain waves in this study haven't been checked by neuroscientists. The interpretations linking increased connectivity to AI rewriting tasks remain unvalidated by cognitive psychology experts. Statistical choices that could make or break the findings haven't faced scrutiny from methodological specialists.
This doesn't mean we should dismiss the study—it means we read it differently, with productive skepticism and awareness. We treat it as hypothesis-generating rather than conclusion-establishing, recognizing that peer review might substantially alter its claims.
Where Lee and Chung's peer-reviewed work offers trustworthy findings about AI enhancing incremental creativity, this IRB-approved study offers intriguing possibilities about neural engagement. Understanding this distinction helps us navigate research responsibly, avoiding both cynical dismissal and naive acceptance of preliminary findings.
Core Findings and Interpretations: What These Studies Tell Us About Human-AI Collaboration
The Creativity Enhancement Study (Lee & Chung, 2024)
Lee and Chung's peer-reviewed research reveals that ChatGPT fundamentally changes how humans approach creative problem-solving, but not in the ways we might expect. Across five experiments involving tasks from gift selection to innovative design challenges, participants using ChatGPT consistently generated more creative solutions than those using traditional web search or no tools at all.
The key finding centers on the type of creativity enhanced: ChatGPT excels at helping users generate incrementally creative ideas—those that build thoughtfully on existing concepts—rather than radically novel breakthroughs. The authors attribute this to ChatGPT's ability to synthesize disparate concepts into coherent, well-articulated suggestions. When asked to design an innovative dining table, for instance, ChatGPT users produced designs that cleverly combined existing furniture concepts in unexpected ways, rather than inventing entirely new paradigms for dining.
This pattern held regardless of task constraints or emotional complexity, suggesting that AI's creative assistance operates through gradual conceptual synthesis rather than aha! moments of totally new thoughts. The implications for education are profound: AI tools may best serve students as exploratory partners that help refine and articulate ideas rather than as sources of revolutionary thinking.
The Neural Connectivity Study (Brain-to-AI, MIT Media Lab)
The Brain-to-AI study offers tantalizing preliminary evidence about how our brains physically respond to AI collaboration. Using EEG monitoring across 54 university students, researchers compared neural activity patterns among three groups: those using ChatGPT exclusively, those using traditional search engines, and those relying solely on their own cognition.
Essentially, the brain has a certain amount of bandwidth, much of which goes underused. [Note well: here is where I would be relying on the neuroscientist peer reviewers; I can't say I have confidence in my understanding, or ever will, but here goes.] The more active and interconnected the measured brain waves, the more metacognitively engaged the brain appears to be.
The most striking finding emerged in Session 4, where students who initially wrote without AI assistance and then revised using ChatGPT showed dramatically higher neural connectivity across all measured frequency bands (alpha, beta, theta, and delta). This suggests that using AI as a revision tool rather than a first-draft generator engages more extensive brain networks and potentially deeper cognitive processing.
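If, like me, you find "alpha, beta, theta, delta" more incantation than explanation, it helps to know that these are just conventional slices of the EEG frequency spectrum. Below is a minimal Python sketch, run on a synthetic signal, of how power in each band is commonly estimated. This is an illustration of the concept, not the study's actual pipeline (the study measured connectivity between recording sites within each band, a more involved calculation that builds on this same decomposition):

```python
# Illustrative sketch only -- NOT the MIT study's pipeline. It shows what
# "alpha, beta, theta, delta" mean: conventional slices of the EEG
# frequency spectrum. The signal here is synthetic.
import numpy as np
from scipy.signal import welch

fs = 256                      # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)  # ten seconds of "recording"

# Fake one-channel EEG: a 10 Hz alpha rhythm buried in noise.
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Estimate the power spectral density with Welch's method.
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

# Conventional band edges in Hz.
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    band_power = np.trapz(psd[mask], freqs[mask])  # integrate PSD over the band
    print(f"{name:>5} ({lo}-{hi} Hz): {band_power:.3f}")
```

Run it and the alpha band dominates, because that's the rhythm we planted in the synthetic signal.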
Students who began with AI assistance showed less coordinated neural effort in subsequent sessions, hinting at what the authors term "neural efficiency adaptation": essentially, the brain may be doing less work when AI handles initial creation. The study also found that AI-first users reported a weaker sense of ownership over their writing and were significantly less able to quote or recall their own essays.
Intriguing, to say the least.
Reading Research Like It Matters
These two studies offer a great opportunity to think about why research quality matters and why we must read with both open minds and sharp eyes. The peer-reviewed creativity study gives us solid ground to stand on: AI enhances incremental creativity through conceptual synthesis. As teachers, we can build pedagogical action-research studies on this foundation because the finding survived brutal expert scrutiny.
The neural connectivity study tantalizes with possibilities (maybe AI assistance is most beneficial at the revision stage; maybe starting with our own thoughts preserves cognitive engagement), but we hold these insights lightly, knowing they await validation. We trust our instincts and experiences more than an academic researcher's hunch.
Here's what reading these studies side-by-side teaches us. In our rush to understand AI's educational impact, we cannot afford to treat all research as equal. A preprint isn't a peer-reviewed article. IRB approval isn't scientific validation. Media Lab transparency isn't Nature journal rigor. These distinctions aren't academic snobbery. They're the difference between building our teaching on bedrock or sand.
The most dangerous response to research isn't skepticism or even misunderstanding. It's false equivalence. When we treat preliminary findings as established fact, dismiss rigorous studies because they challenge our assumptions, or embrace problematic studies because they flatter our biases, we fail our students. This is why we have the phonics wars.
The creativity study shows AI can be a powerful educational tool when used thoughtfully. The neural study suggests fascinating possibilities about how that tool affects our brains. One finding we can act on today; the other we watch with interest as it develops.
In this era of AI transformation, we need research literacy more than ever. Not because we should blindly trust academics, but because distinguishing solid evidence from intriguing speculation helps us make better decisions for our students. The next time someone shares a study about AI in education—and they will, constantly—ask yourself: Has this been peer-reviewed? What's the venue's reputation? Is this preliminary or established?
Read the whole thing. Read it critically. Our students deserve nothing less than evidence-based practice built on the best available research. That's how we navigate this brave new world: one carefully evaluated study at a time.