The CCCC: A Powerful Voice in Composition Studies Since 1949
Founded in 1949, the Conference on College Composition and Communication (CCCC) emerged when over 500 instructors gathered in Chicago to address the specific challenges of teaching college freshman English. Initially approved by the National Council of Teachers of English (NCTE) as a three-year experiment, the CCCC quickly proved its value and was permanently established by 1950. From these origins focused on practical pedagogical concerns, the organization has evolved into what it now describes as "the world's largest professional organization for researching and teaching composition." Through its influential journal College Composition and Communication, annual conventions, and position statements, the CCCC has shaped writing instruction across higher education for over seven decades.
Decoding the CCCC 2023 Position Statement: Anxieties in the Age of AI
The 2023 "Principles for the Postsecondary Teaching of Writing" opens with language that reveals a field under siege. The document declares that "foundational assumptions about the teaching of writing, its place in higher education, and its ability to help foster a truly inclusive democratic society are increasingly contested." This framing immediately positions writing educators as defenders of democratic values against unnamed but threatening forces.
The document speaks to multiple audiences simultaneously, but its primary intended readers appear to be those already within the field. While it claims to address "postsecondary teachers, departments, administrators, policy makers and legislators," the insider terminology and theoretical references (citing Baker-Bell, Canagarajah, hooks, and others without explanation) suggest the real audience is composition specialists who already share its concerns.
The statement's explicit purpose is "guidance" on how to "move through these changes responsibly, ethically, and with equanimity." Yet its more urgent implicit purpose emerges in its portrayal of a discipline under "many different assaults" — language that frames the document as a political manifesto rather than mere pedagogical guidelines.
The text's anxieties are both technological and economic. AI receives particular attention as something that "threatens real human to human communication," positioning ChatGPT as an existential risk rather than just another tool. This framing reveals deep concerns about the devaluation of writing instruction as AI seemingly makes traditional writing skills obsolete.
Labor concerns permeate the document with remarkable specificity: "the majority of postsecondary teachers nationwide are now adjuncts without tenure protection, and whose terms of employment rely on them delivering a curriculum in which they have little or no say." This bluntly acknowledges the exploitation of writing instructors who lack both job security and academic freedom, connecting pedagogical concerns directly to labor conditions.
The statement expresses fears of writing education being reduced to helping students merely "avoid grammatical errors" as part of "occupational training." This reveals anxiety about writing's civic purpose being sacrificed to narrowly utilitarian aims that serve business interests rather than democratic engagement. The document specifically calls out "the framing of educational outcomes in purely economic terms" as part of the "many assaults on humanities programs."
What remains conspicuously absent is any acknowledgment that some criticisms of current writing pedagogy might be valid. There is no recognition that alternative approaches to teaching writing might have merit, or that AI tools might offer genuine benefits alongside their risks. This silence suggests a defensive posture from a field that perceives itself as losing cultural and institutional authority in an increasingly technological education landscape.
Bridging the High School-College AI Divide: A Teacher's Perspective
For a high school English teacher who has embraced AI as a learning tool with students, this institutional resistance creates a genuine dilemma. Your students have developed sophisticated skills with AI: thoughtful prompting, critical evaluation of outputs, and using these tools to deepen their understanding rather than circumvent learning. Yet the CCCC document reveals attitudes that may create a jarring transition for them in college.
The reality is more nuanced than the "no AI" message that's cutting through the noise. Looking closely at the CCCC statement, while it expresses clear anxieties about AI "threatening real human to human communication," it also acknowledges that "blanket prohibitions against such tools seem short-sighted and even counter-productive." This suggests at least some recognition that AI tools are here to stay.
Your students might encounter several scenarios in college:
The Strict Prohibitionist: Some professors, particularly those most aligned with the defensive stance in the CCCC document, may ban AI entirely and view any use as cheating. Your students' sophisticated AI skills could be unwelcome or even penalized.
The Critical Engager: Other professors might share the CCCC's concerns but follow its recommendation that "teachers should help both colleagues and students critically examine their own uses, collaborate with learners and community members on the ethical dimensions of these novel tools." These instructors could value your students' thoughtful approach to AI.
The Process Emphasizer: Some may follow the CCCC's suggestion that "an emphasis on process and an ongoing dialogue with students about their reasons for changing drafts is instruction that clearly focuses on student learning rather than rote textual production." Here, your students' ability to articulate how they use AI as part of their process could be an advantage.
The best approach might be to have explicit conversations with your students about this transition. Help them understand the concerns many writing instructors have, the historical and labor contexts that fuel these anxieties, and how to navigate different policies they might encounter. Encourage them to be transparent about their AI skills while being respectful of instructor boundaries.
Teaching students to "code-switch" between AI-enhanced writing processes and traditional approaches ensures they maintain fundamental writing skills while also developing their AI literacy. This balanced approach prepares them for the diverse expectations they'll encounter in higher education, helping them navigate a landscape where institutional anxiety about AI often overshadows its potential as a learning tool.
Stuck in the Middle of the AI Panic: A Student's Perspective
Everyone's freaking out about AI, and I'm just trying to survive my senior year. The whole thing is totally unfair. I never even touched ChatGPT when it first came out, but that didn't stop Mrs. Peterson from making me stand up in front of everyone and "confess" to cheating. I still get hot with embarrassment thinking about it. Twenty-seven faces staring at me while I had to promise not to use AI again—for something I didn't even do!
After that humiliation, I figured what's the point? If I'm going to get accused anyway, I might as well learn how this stuff actually works. So yeah, I started experimenting with ChatGPT. Not for cheating, but to understand what all the fuss was about.
What I found was actually pretty useful. It helps me brainstorm when I'm stuck on an essay. It explains concepts I don't understand in physics. It even helped me prep for my SAT by generating practice questions. I'm not letting it write my papers—I want to get into premed, not flunk out my freshman year because I never learned to think for myself.
But it feels like I'm caught in this weird in-between space. The older generation is panicking, with teachers contradicting each other about what's allowed. My English teacher says we can use it for brainstorming but not drafting. My history teacher bans it completely. My computer science teacher actually makes us use it and analyze what it gets wrong. Then parents are freaking out about "bot brain rot" like AI is going to turn us all into zombies.
And now I hear college professors are just as confused? Great. So I'm supposed to develop skills for a world that's changing by the minute, while the people teaching me can't even agree on what those skills should be.
I just want clear guidelines. Tell me what's allowed and what isn't. Show me how to use these tools responsibly instead of pretending they don't exist or treating them like some kind of academic nuclear weapon.
The truth is, AI isn't going away. Medicine—the field I want to go into—is already using it for diagnostics and research. Shouldn't I learn how to work with these tools rather than pretend they don't exist? I care about my education. I want to learn. But this chaos of conflicting rules and panic isn't helping anyone.