I teach at a small, public liberal arts college in one of the smallest Canadian provinces, New Brunswick. I believe the value proposition of St. Thomas is fairly straightforward: we provide a high-touch liberal arts education within a small, supportive community at an accessible price, in a safe, inexpensive, fairly idyllic college town (Fredericton, NB). The Great Books Program at St. Thomas, of which I am the director, is probably the cheapest such program in North America. I’m a graduate of the program, and I think we have punched well above our weight in student flourishing and success: we’ve had two Rhodes Scholars in the last three years, we’ve placed students in top graduate programs (Harvard, Toronto, Cambridge, etc.), and our alumni are successful in pretty much every field of endeavour you can imagine. Again, I don’t think the case for St. Thomas, or for our program, is difficult to make. And yet our university’s enrolment numbers, hurt even more by the pandemic, don’t reflect what I consider to be a fundamentally excellent “product” at a competitive price.1
Everyone knows the higher education “sector” is headed for a cliff, and we’re assailed for our putative irrelevance from all sides. I’m not going to waste your time with the same tiresome, gloomy prognostications about the liberal arts. I find the discourse around liberal education irritating because it frequently claims that the kind of thing I do every day is dead or impossible. But we’re still here, for now.
The latest iteration of ChatGPT has been live for about a month, and we in higher education are naturally concerned about what it means for us and for our students. Everyone immediately noted that the algorithm writes passable freshman essays in response to the usual sorts of prompts. I do not think that ChatGPT represents a threat to even large lecture classes that differs qualitatively from the generic menace of the internet: students already have access to everything ChatGPT “knows,” and apps like Grammarly (to say nothing of the venerable spellchecker) already offload much copyediting to AI. There is a difference, however: if a student were to produce a paper on, say, Hobbes’ Leviathan through the judicious, undocumented use of internet sources, they might still inadvertently learn a few things as they read enough to know where to hit command+C, make sure all their fonts match, and change enough words to make the plagiarism Google- or Turnitin-proof. ChatGPT offers the understandably stressed, truly lazy, or downright vicious plagiarist a new tool.2
But my response to all of this is to shrug. This is because I believe ChatGPT is simply more evidence of what I already believe to be the case: the future of education, and liberal arts education in particular, is small: small, discussion-based classes, professors who know their students, personalized assignments and assessment, intellectual community. Scaffolded, iterative essay assignments and oral or in-class exams simply can’t be done by ChatGPT.
Institutions like mine can’t compete with some of the perks of the bigger universities in Canada and the U.S., but here’s an area where we excel: I know my students, most of whom will have had me for four years by the time they graduate from our program. I’m able to track their progress and help them develop over time. I listen to their comments in class day in and day out. If they submitted an AI-generated paper, I’d be able to tell. This isn’t boasting; it’s simply one of the very significant benefits of my specific institution. Other institutions have other challenges (as do the poor faculty at such institutions); our problem, and the problem at many liberal arts colleges, is a baffling lack of students. My hope, maybe naive, is that people will start to figure out that there’s a difference between a credential an algorithm can earn and one an algorithm cannot. Education is fundamentally not transactional, nor is it a product that can be bought and sold. It doesn’t “scale.” Education is a relationship. If I were a higher education administrator, I would begin from the premise that this is what education is, and work out from there how to make it happen. For the time being, I suppose I can rest easily enough knowing I have the solution to at least one problem.
Do you know someone who’d like to study the Great Books in Canada? Hit me up, and I’ll put you in contact with an admissions counsellor.
As usual, the question “Why will students plagiarize?” strikes me as apposite. If a student is plagiarizing, any number of things have gone wrong long before the nefarious googling even gets started. We professors underestimate our own role in creating conditions productive of academic honesty.
The academic panic over GPT seems to confuse two concerns: first, that the AI does the assignment better than your average student, and second, that the AI is “undetectable” as a plagiarism aid. The first point seems to be true but irrelevant, since the AI is not your student, and the second point is plainly false, since it’s quite detectable once you see how it works a couple of times. I don’t have small classes or even know all my students’ names, and I easily noticed several AI-generated papers this term. In fact, they’re much easier to notice than when a student pays/asks/seduces a classmate or someone else to write papers for them, because the results of those efforts are genuinely indistinguishable from an “honest” paper (including, sometimes, indistinguishably bad!) and there is no accessible evidence to ground one’s suspicions. But that mode of cheating has been around forever. Honestly, I don’t understand this level of panic. Academics are just hyperventilators who constantly require a bag to breathe in.