Artificial intelligence is not a pandemic
There are concerns that ChatGPT could undermine the educational system, and growing demands for rules governing its use. Gerd Kortemeyer argues that anything but the most common-sense regulations could end up being counterproductive.
When COVID-19 first hit, we fell over ourselves generating, proclaiming, and retracting rules and restrictions; we produced a torrent of partially contradictory regulations about masks, testing, travel, and vaccinations. Today, certain measures may seem slightly ridiculous, ineffective, or too far-reaching. At the time, though, imperfection was better than inaction: it was imperative to contain the deadly pandemic.
AI is not a deadly pandemic, yet we are in danger of once again scrambling to issue draconian rules and regulations to contain its spread. Compared to other disruptive technologies, tools like ChatGPT have admittedly burst onto the public stage rather abruptly, but already there are calls for a complete freeze on the development of new AI models. Entire countries are trying to ban ChatGPT, publishers are expanding author agreements, and universities are rushing to introduce regulations for, but often against, its use. Very likely, such measures will also look slightly ridiculous, ineffective, or too draconian a few months from now.
After the initial shock...
AI is not a pandemic, but a tool – albeit an impressive one. What came as a shock to the educational system is that it can master university admissions exams, get passing grades in introductory science courses, and produce essays and presentations full of plausible fiction. I deliberately use the term “fiction”: no matter how factual the content may appear, it is ultimately a statistically probable compilation of text fragments whose sources cannot be traced. The text corpus used for training is proprietary, the algorithm blends everything together, and if ChatGPT is asked to supply references, it invents them outright.
Even so, its programming, language-translation, and text-summarization capabilities are astounding. We will need time to figure out what this says about AI, but also what it says about our educational system.
In chess tournaments, AI is banned to preserve the human enjoyment of this artful game. In academia, however, we have always been expected to use the most powerful tools available to push the boundaries of knowledge. The discussion in higher education cannot be about banning a tool completely; it needs to be about the consequences of this disruption for what and how we teach – with probably only a few hard boundaries covering abuse of the tool.
More than “just” plagiarism
Rules about plagiarism are not very helpful for setting boundaries, since they were created before AI was viable for everyday use, and they usually focus on passing off someone else’s intellectual property as one’s own. Strictly speaking, unless AI is granted personhood, this does not apply. Instead of focusing on “someone else’s work,” we should focus on “one’s own work”. Using unmodified text from AI tools and claiming “I wrote this” would clearly be a lie.
On the other hand, AI tools can legitimately be used to overcome writer’s block and to get a quick overview of the good, the bad, and the ugly of what their vast text corpora contain on a given topic. But then human authors need to make the result their own work: separating the wheat from the chaff, and verifying and validating the information against actual scholarly sources. Exactly where in the process “one’s own work”, in the sense of independent scholarship, begins is open to debate, but an outright ban on any AI-generated words or formulations would be overreaching. Let’s take the time to figure this out! Or are we in a hurry because we fear being embarrassed by giving good grades to ChatGPT?
Better to vaccinate against fake news
In any case, discussions about cheating with AI are probably less than constructive. Students come to us because they want to learn, and because they value critical thinking, creativity, and independent thought. It is our responsibility to teach the skills, concepts, methods, and competencies that will allow them to perform at their best in a world where AI is ubiquitous. While still laying a solid foundation in mathematics and the natural sciences, we will need to revisit some curricula. Simple programming exercises, for example, might become obsolete, while computability, algorithms, and information theory may deserve more focus. Manually translating and summarizing texts might become obsolete, but we might need to vaccinate our students against the pandemic of viral fake news, fake science, and deepfakes.