This post is prompted by a specific article I recently saw shared across social media: “AI Is Destroying the University—and Learning Itself,” written by Ronald Purser and published in Current Affairs in December 2025. The piece joins a growing chorus of voices arguing that artificial intelligence represents an existential threat to higher education and to learning more broadly.
Purser raises serious concerns worth engaging. Misuse of AI, erosion of academic norms, and institutional hypocrisy are real problems. But my aim here is not a point-by-point rebuttal. Instead, I want to shift the conversation away from the mere existence of AI and toward the deeper structural, pedagogical, and ethical issues in higher education that AI is accelerating, amplifying, and, in some cases, revealing.
A Familiar Fear, Revisited
Concerns about technology eroding human capacity are not new. In a recent episode of my Gardner-Webb podcast, The Bulldog Mindset, Associate Dean and Professor of English Dr. Jennifer Buckner noted that in the Phaedrus, Plato famously worried that writing itself would weaken memory and diminish the art of speaking. If knowledge could be written down, he argued, people would rely on external marks rather than internal understanding. Memory, rhetoric, and dialogue would suffer.
History tells a different story. Writing did not destroy thinking. It reshaped it. It expanded access, preserved ideas, and enabled forms of reflection that oral culture alone could not sustain. What changed was not whether people thought, but how they thought and where effort was required.
Before we get to AI, it is worth remembering how often this pattern repeats. The arrival of calculators, the internet, Excel, and Wikipedia all triggered fears that their mere existence would undermine learning. In each case, the concern was not simply misuse, but the belief that these tools would inevitably weaken cognition. And in each case, the reality proved more complex. Outcomes depended far less on the tool itself than on how learning environments adapted around it.
AI belongs in this lineage. It is disruptive, unsettling, and unevenly understood. But disruption alone does not equal destruction. The deeper question is what our educational systems choose to reward in its presence.
What AI Actually Reveals
Much of the panic surrounding AI stems from a hard realization: a significant portion of our assignments, assessments, and course activities were already vulnerable. And this vulnerability did not affect all disciplines equally. Fields centered on writing, rhetoric, and the humanities were often hit first and most visibly, precisely because so much learning in those areas is assessed through text-based production.
When tasks can be completed by a system trained on existing patterns, it suggests that those tasks primarily asked students to reproduce information rather than interrogate it.
This is uncomfortable, especially for disciplines where writing, synthesis, and analysis are central. But the discomfort points inward. AI is not erasing learning. It is exposing where learning was thin to begin with.
The same pattern appeared with calculators, spreadsheets, Wikipedia, and online learning. Each innovation triggered claims that students would stop thinking. Each time, the real issue turned out to be design. When instruction evolved, learning endured. When it didn’t, shortcuts flourished.
Misuse Is a Human Choice
The most compelling critiques of AI correctly name misuse. Passing off generated work as original. Automating engagement. Avoiding effort. These behaviors matter, and institutions should address them directly.
But misuse is not an inevitability of the tool. It reflects incentives, norms, and expectations. Students respond to what we signal counts. If speed and completion matter more than process and reflection, AI becomes attractive. If transparency, iteration, critique, and application matter, AI becomes constrained.
At the end of the day, many anti-AI positions assume, implicitly or explicitly, that responsible use is either impossible or highly unlikely. That assumption deserves close examination. Higher education has long claimed that it prepares students for ethical reasoning, self-regulation, and judgment. If we abandon that responsibility precisely when it becomes most difficult, we concede far more than we admit.
Ethics does not emerge from prohibition alone. It emerges from guided practice, clarity about purpose, and environments that make integrity meaningful rather than merely enforceable.
The Work We Can No Longer Avoid
What this moment demands is not blanket resistance, nor uncritical adoption. It requires institutions to do work they have postponed for years.
We must be honest about which assignments actually foster learning and which primarily reward compliance. We must acknowledge that responsibility does not rest solely with a CFO or provost, but is shared across the institution, including by faculty. And we must be willing to have difficult conversations about programs themselves.
That work extends beyond individual assignments. It also forces institutions to ask harder questions about programs themselves. Are the outcomes they promise defensible? Are they achievable in ways that remain meaningful in an AI-rich environment? Are they sustainable without relying on increasingly fragile assumptions?
In some cases, redesign will be enough. In others, sunsetting may be the more honest choice. Often, the most promising path forward lies in broader, interdisciplinary programs that draw on existing strengths rather than preserving narrow silos. Integrating philosophy or ethics with science and business is not a retreat from rigor. It is a recognition that complex problems demand integrated thinking, especially when tools can handle routine production.
AI as a Test, Not a Verdict
AI does not determine the future of learning. It tests our clarity about what learning is for.
If education is about depositing information and retrieving it on demand, then AI poses an existential threat. If education is about judgment, context, interpretation, creativity, and human connection, then AI becomes a tool that can sharpen those aims when used with care.
Plato feared writing would hollow out memory. Instead, it expanded human thought. AI may yet do the same, but only if higher education is willing to confront its own habits, assumptions, and avoidance.
The real danger is not that AI exists. It is that we mistake fear for analysis and retreat instead of redesign.