The Medieval University to the Enlightenment: Evolution of Knowledge and Authority
Universities as we know them in the Western world began in Bologna around 1088; in Paris, which evolved into a university between 1150 and 1170; and in Oxford, established between 1167 and 1185. These infant institutions would transform intellectual development in Western civilization. In due time, the invention of “courses” and “credits” would emerge to make life easier for university registrars.
Bologna emerged as Europe's first university with a focus, of all things, on law, particularly Roman law. At first the university had no buildings or dorms, so students banded together to form a “nation,” a recognized self-governing structure with rights exceeding those of individuals, including the right to elect its own leaders. Its origin as a student-controlled “nation” gave Bologna a different organizational structure from later universities, but it established a theme and tradition of university self-governance apart from political or religious hegemony. Paris, by contrast, grew organically from cathedral schools on the Left Bank, largely to educate children of the nobility, and became known for theological studies. Oxford, possibly established by scholars migrating from Paris, initially specialized in arts and theology before expanding its curriculum.
Medieval universities operated under official papal or royal charters that granted them significant institutional autonomy, an autonomy crucial for creating spaces where ideas could be examined, debated, and developed with relative freedom from immediate political or religious pressures, precisely the space that is under threat in our Trumpian era. Though not independent of church or state power, these institutions planted the seed of what we might think of as academic freedom.
Medieval Scholastic Foundations
Scholasticism's original approach—the forerunner of our “scholars” of today—centered on reconciling Aristotelian logic with Christian theology. Scholars did their reconciliatory work using cognitive and sociocultural protocols we still follow. Textual analysis of classical and religious works, an early form of close reading perhaps, generated individual perspectives on just what exactly the ancients meant by their texts. Bringing various perspectives together, scholars engaged in dialectical disputations to resolve contradictions among authorities.
In the Categories, Aristotle presented ten fundamental classes into which all aspects of debate could be sorted. In this work he drew a boundary around the materials underlying debate: we could debate about substance, quantity, quality, relation, place, time, position, state, action, and affection. How to construe the categories and their interrelations was an Aristotelian predicament requiring immediate scholastic attention. Systematic categorization provided common ground for productive reasoning together using definition, division, and syllogistic logic. The predicament continues (Studtmann, 2024):
“Plainly, the enterprise of categorialism inaugurated by Aristotle runs deep in the philosophical psyche. Even so, despite its wide-reaching influence — and, indeed owing to that influence — any attempt to describe categorialism faces a significant difficulty: experts disagree on many of its most important and fundamental aspects.”
Medieval universities institutionalized these methods of textual analysis and debate through curriculum structures that favored commentary over experimentation. Students first encountered authoritative texts through formal lectures, then participated in organized debates where arguments were rigorously tested. This pedagogical approach fostered argumentative skills while still acknowledging the authority of classical and Christian sources.
The Enlightenment Paradigm Shift
The Enlightenment marked a profound transformation in the control of knowledge production and valuation, shifting validation from textual authority and deductive logic to empirical observation and inductive reasoning. This transition built upon medieval foundations while radically redefining research methodologies.
Francis Bacon's Novum Organum (1620) challenged scholastic norms by advocating inductive reasoning based on systematic data collection, empirical verification through repeatable experiments, and the rejection of Aristotelian syllogisms as primary truth-finding tools. John Locke's Essay Concerning Human Understanding (1689) and David Hume's works established empiricist principles that further revolutionized approaches to knowledge. Locke’s tabula rasa concept denied innate ideas; we don’t arrive on the planet with Platonic forms in our brains. Instead, we have a blank page on which our senses write our memories. Indeed, on Locke’s view, sensory experience is the basis of all knowledge (Thomas, 2022). Probabilistic understanding of natural phenomena replaced scholastic certainty as the standard for truth. Usher in means, standard deviations, and analysis of variance.
“…Bacon wanted to help restore human knowledge to the condition that Adam was said to have had in Paradise, before the Fall, and to restore human communication to the universal language that humankind was said to have had at Babel, before the confusion of tongues described in the Old Testament book of Genesis.”
Institutional Transformations
This epistemic shift manifested through significant institutional changes that revolutionized the meaning of “authority-based” knowledge and displaced the reading of texts and the hearing of lectures as the primary epistemological touchstones.
These changes were practically implemented through scientific societies like the Royal Society (1660), which originated from an 'invisible college' of natural philosophers who had been meeting since the mid-1640s to discuss Francis Bacon's inductive and empirical ideas. These societies fostered a transition from disputations about texts to debates about experimental results, and they developed peer review systems to replace appeals to magister dixit: “the teacher has said it.”
Legacy and Continuity
What comes from examining this evolution is a fascinating story of rupture and continuity. The Enlightenment preserved scholasticism's systematic rigor while redirecting its analytical tools toward observable phenomena. This synthesis created the methodological foundation for modern scientific inquiry, maintaining logical precision while silencing the voices of the ideologically powerful and shifting the researcher’s gaze from authority toward empirical data collection. Of course, the shift introduced an entirely new problem: distortions from reification. Reification freezes complex objects and phenomena, decontextualizes them, and crops off aspects of the things under study, which in real life are not what they are under a microscope or when defined as an independent or dependent variable. Counting and calculating “things” deconstructs and abstracts contexts, risks dehumanizing human experience, and reconstructs structural models as maps that ignore parts of the territory that do not align with the statistical gaze. This problem, for example, shows up again and again in studies of phonics instruction.
The legacy of medieval universities continues to shape our understanding of higher education, even as the Enlightenment fundamentally transformed how we validate and pursue knowledge. The early emphasis on institutional autonomy, structured curriculum, and rigorous debate established principles that would endure through centuries of educational evolution, even as the objects and methods of study dramatically changed.
AI's Radical Presence in University Education: The Unrecognizable Academy
Gone are the days when medieval disputations and Enlightenment verification were fresh and interesting. Today's AI-infused university will likely bear little resemblance to its historical ancestors fifty years from now. Unless university educators exercise their self-regulatory rights and, instead of working so hard to defend the institution from change, work harder to sustain the values that must persist and devise workable practices that incorporate the power of these new simulation machines, what college is like in fifty or a hundred years could become a caricature of itself. How exactly has artificial intelligence upended academia's ancient foundations?
First, kiss traditional assessment goodbye. No longer content with grading final papers, some universities are already using AI to track and evaluate "the different stages of the learning process itself"—a seismic shift from both medieval disputation and Enlightenment empiricism. The professor's red pen will likely fade gradually into obsolescence as it becomes clearer that magister dixit has shapeshifted from its ancient persona.
Who's the authority now? Certainly not the tweed-jacketed sage on the stage. Students armed with AI systems can access expertise that rivals or surpasses their professors in specific domains, shattering the millennium-old professor-as-knowledge-keeper paradigm into a thousand digital pieces. Here on Substack, one writer published an example of a credible AI dissertation that would take longer to print than it took to write. Of course, this radical displacement of identity feels earth-shaking, but ignoring the shocks and the aftershocks will not change the inevitable. Professors have a starring role to play in revising their academic presence, a role that can’t be fulfilled sniping from the sidelines.
The sacred practice of close reading—that cornerstone of university education since Bologna's founding—finds itself under siege (Watkins, 2024). Instead, the burning question becomes how to "optimize AI's ability to augment human creativity, problem-solving and collaboration" rather than simply analyze dusty tomes. There are substantial ways in which LLMs can scaffold much closer reading than humans alone are capable of.
University walls? What walls? Traditional institutional boundaries dissolve as Michigan partners with Microsoft to develop its own AI (named "MiMaizey," of all things) and Arizona State collaborates directly with OpenAI. The ivory tower now sports revolving doors connecting directly to Silicon Valley. There is so much wrong with this scenario that it boggles my mind. Watching professors clamor to expose the stupidity and questionable morals of administrators who are desperate for their professors’ help, rather than pitch in to help right the ship, seems a lot like beating heads against walls.
In danger, too, is the competitive model where students are "sorted into categories where some are considered 'better than' others" based on individual grit. AI tools are poised to democratize knowledge production, leaving academic hierarchies, built on old expectations and grading practices to which professors stubbornly cling, wobbling unsteadily on their ancient foundations.
What about professors? As already mentioned, their role is transforming dramatically, becoming "gardeners who are being asked to nurture two different seeds"—developing both "AI-ready students" and "shaping the future use of AI technologies." From ancient robed sage to gardener: quite the professional identity crisis.
Some visionaries even suggest AI might push universities to "dispense with grading and refocus attention on empathy and learning," potentially "reshaping colleges and universities into dynamic hubs for global change." This binary vision of the university, in my mind, calls for strong and coherent critique—just to disentangle the assumptions. If the contribution universities make to humankind is indeed dependent on providing safe space for highly specialized expertise to develop and expand, perhaps researchers might need to focus on their research rather than on changing the world? AI doesn’t have anything like human research brilliance. Can we stop arguing emotionally and get down to brass tacks?
And what of academic integrity? 96% of instructors suspect cheating has increased since AI adoption, yet studies show no causal link between ChatGPT use and plagiarism. In other words, AI is not the causal variable producing the effect of plagiarism. Something else is causing it, if plagiarism is indeed erupting like the measles under RFK’s medical leadership. Even so, the assumption of causation now seems to require institutions to "develop clear policies about using [AI] in academic work," creating ethical labyrinths never imagined by our scholarly ancestors.
Remember: professors got along well for centuries without grades, credits, and courses. In my experience, policies have never assured integrity. Does anyone else agree with me that a lack of integrity of the sort AI could promote if the field continues to privilege performative knowledge-making could bring down the entire edifice?
From medieval monasteries to AI laboratories, university education's transformation has been nothing short of miraculous. Bologna's founders would surely stand awestruck at what their creation has become. The most distressing part regarding AI for me initially was the widespread (I don’t mean to be hyperbolic, but from where I sit it was widespread) lack of interest in moving beyond emotion to a stance of inquiry. This in no way is intended to obscure the amazing work of professors like Ethan Mollick and many others who took up the challenge very early on.
From the beginning, a more objective analysis of AI’s impact on student learning, rather than a largely subjective response to its impact on the professor’s role, could have generated a collaborative agenda to figure it out. The standard for mentored experience using AI for serious purposes is 10 actual clock hours of rolling up the sleeves with a bot. Once that threshold is crossed, every professor will have firmer ground to stand on. That’s all I ask.
Is that too much to ask?