
Should universities ban, use, or cite Generative AI?

The International Association of Universities (IAU) asked me to write a short perspective on Generative Artificial Intelligence, which I am also allowed to post below. It was published in May 2024 in the IAU’s magazine IAU Horizons Vol. 29 (1), also available in this pdf (scroll to page 28).

Seventeen multicoloured post-it notes are roughly positioned in a strip shape on a white board. Each one of them has a hand drawn sketch in pen on them, answering the prompt on one of the post-it notes "AI is...." The sketches are all very different, some are patterns representing data, some are cartoons, some show drawings of things like data centres, or stick figure drawings of the people involved.

Picture: Rick Payne and team / Better Images of AI / Ai is… Banner / CC-BY 4.0

Ban, use, or cite Generative AI?

Should universities ban the use of generative AI (GenAI) in written works or, on the contrary, teach how to integrate it into learning practices? Extractive data practices of many available GenAI platforms support the first stance, whereas the general hype around AI and widespread access may favor the second one. However, neither position does justice to the university’s epistemic mission in teaching. Instead of focusing on banning or imposing new information technologies, universities should more than ever strive to provide the conditions within which humans can learn.

Digital transformation

The narrative of AI as a revolutionary force overlooks the foundational role of digitization and connectivity, with the Internet and web technologies pioneering the changes we now attribute to AI. These earlier innovations have profoundly impacted how information is accessed, consumed, created, and distributed. Our students have used them from early on: from Google searches about topics or the spelling of words to reading Wikipedia articles, from sharing course notes online to asking for homework help in Internet forums, the university learning experience had already been changing long before the arrival of GenAI. At the same time, students’ learning experience has always included taking responsibility for their work, no matter how it was created.

Common misconceptions

Internet and web technologies have also facilitated the unprecedented digital data generation and accumulation that served to create current GenAI models. Today, few would advocate for a complete ban on access to web search or Wikipedia at universities. I therefore find it curious that GenAI reopens such conversations. Why? Because GenAI is neither source nor author. Attributing human-like thinking or consciousness to it is misleading. GenAI does not provide knowledge. It is a powerful computational tool that generates output based on previous data, parameters, and probabilities. These outputs can be used by humans for inspiration, modification, or copy-pasting, or simply be ignored.

At our university, students do not need to reference the use of thesauri, online and offline dictionaries, writing correction software, or conversations with others about the topic in their writing. I am not fond of the idea of generically referencing the use of GenAI. Ascribing GenAI the status of a source or author to be cited profoundly mischaracterizes how the technology works and further reinforces the AI hype narrative. Moreover, it may wrongly incentivize students to view GenAI output like the other types of sources we already ask them to cite. Because GenAI generates individualized output with each request (hence its name), such output cannot be traced back or reproduced in the future. I fail to see what would be gained by citing it, unless it is for specific educational purposes.

Ethical challenges

Should the use of GenAI be encouraged, then? If it is such a powerful computational tool, harnessing its benefits within universities seems not only justified but necessary. However, as so often, it is complicated. Thanks to scholars in the humanities and social sciences, as well as activists and journalists, we know better than to uncritically endorse any of these platforms. There are valid criticisms that can, and should, be raised against GenAI platforms: illegal data acquisition strategies, veiled data labor, lack of basic testing and missing ethical guardrails, dubious business motives, lack of inclusive governance, and harmful environmental impact.

Comprehension beyond the hype

What we cannot do is ignore the existence of GenAI platforms that are easily accessible to our students. In an article for The Guardian, the eminent media scholar Siva Vaidhyanathan warned as early as May 2023 that we might be “committing two grave errors at the same time. We are hiding from and eluding artificial intelligence because it seems too mysterious and complicated, rendering the current, harmful uses of it invisible and undiscussed.” GenAI, its output, and its implications need to be understood in all fields and contexts. This encompasses not only grasping the technical aspects of these technologies but also critically analyzing their social, political, and cultural dimensions. Our goal should thus be to cultivate a safe, positive learning environment that stimulates critical thinking. Ideally, universities foster the skills that allow students to evaluate information and build on existing knowledge to make informed decisions outside of any hype discourse. Such skills will not become less relevant in times of abundant GenAI content, but more.