
Should universities ban, use, or cite Generative AI?

The International Association of Universities (IAU) asked me to write a short perspective on Generative Artificial Intelligence, which I have been allowed to also post below. It was published in May 2024 in the IAU’s magazine IAU Horizons Vol. 29 (1), also available in this pdf (scroll to page 28).

[Image: seventeen multicoloured post-it notes arranged in a strip on a whiteboard, each with a hand-drawn pen sketch answering the prompt written on one of them, “AI is…”. The sketches are all very different: some are patterns representing data, some are cartoons, and some show data centres or stick-figure drawings of the people involved.]

Picture: Rick Payne and team / Better Images of AI / Ai is… Banner / CC-BY 4.0

Ban, use, or cite Generative AI?

Should universities ban the use of generative AI (GenAI) in written works or, on the contrary, teach how to integrate it into learning practices? Extractive data practices of many available GenAI platforms support the first stance, whereas the general hype around AI and widespread access may favor the second one. However, neither position does justice to the university’s epistemic mission in teaching. Instead of focusing on banning or imposing new information technologies, universities should more than ever strive to provide the conditions within which humans can learn.

Digital transformation

The narrative of AI as a revolutionary force overlooks the foundational role of digitization and connectivity, with the Internet and web technologies pioneering the changes we now attribute to AI. These earlier innovations have profoundly impacted how information is accessed, consumed, created, and distributed. Our students have used them from early on: from Google searches about topics or the spelling of words to reading Wikipedia articles, from sharing course notes online to asking for homework help in Internet forums, the university learning experience had been changing long before the arrival of GenAI. At the same time, students’ learning experience has always included taking responsibility for their work, no matter how it was created.

Common misconceptions

Internet and web technologies have also facilitated the unprecedented digital data generation and accumulation that served to create current GenAI models. Today, few would advocate a complete ban on access to web search or Wikipedia at universities. I therefore find it curious to see how GenAI starts such conversations anew. Why? Because GenAI is neither source nor author. Attributing human-like thinking or consciousness to it is misleading. GenAI does not provide knowledge. It is a powerful computational tool that generates output based on previous data, parameters, and probabilities. These outputs can be used by humans for inspiration, modification, or copy-pasting, or they can simply be ignored.

At our university, students do not need to reference the use of thesauri, on- and offline dictionaries, writing-correction software, or conversations with others about the topic in their writing. I am not fond of the idea of generically referencing the use of GenAI. Ascribing GenAI the status of a source or author to be cited is a profound mischaracterization of how the technology works and further reiterates the AI hype narrative. Moreover, it may wrongly incentivize students to view GenAI output as similar to other types of sources we already ask them to cite. But because GenAI generates individualized output with each request (hence its name), such output cannot be traced back or reproduced in the future. I fail to see what would be gained by citing it, unless for specific educational purposes.
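To make the non-reproducibility point concrete, here is a minimal, purely illustrative sketch in Python: a toy word-chain sampler of my own invention, not how any particular GenAI platform works. Output is sampled from probabilities derived from previous data, so two identical requests need not yield the same text.

import random

# Toy illustration, not any real GenAI system: a "model" reduced to
# probabilities of the next word given the current one, derived from
# previous data. Real models have billions of parameters, but the
# principle -- output generated from data, parameters, and
# probabilities -- is the same.
NEXT_WORD = {
    "the":  [("cat", 0.4), ("dog", 0.3), ("idea", 0.3)],
    "cat":  [("sat", 0.6), ("slept", 0.4)],
    "dog":  [("ran", 0.7), ("barked", 0.3)],
    "idea": [("emerged", 1.0)],
}

def generate(start, steps=2):
    """Sample a continuation word by word from the probability tables."""
    words = [start]
    for _ in range(steps):
        options = NEXT_WORD.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options)
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

# Identical requests are sampled anew each time, so they can yield
# different outputs -- one reason a given output cannot be reproduced later.
print(generate("the"))
print(generate("the"))

Scaled up by many orders of magnitude, this sampling step is why yesterday’s output cannot simply be looked up again tomorrow.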

Ethical challenges

Should the use of GenAI be encouraged, then? If it is such a powerful computational tool, harnessing its benefits within universities seems not only justified but necessary. However, as so often, it is complicated. Thanks to scholars in the humanities and social sciences, as well as activists and journalists, we know better than to uncritically endorse any of these platforms. There are valid points of criticism that can, and should, be brought up against GenAI platforms, such as illegal data acquisition strategies, veiled data labor, lack of basic testing and missing ethical guardrails, dubious business motives, lack of inclusive governance, and harmful environmental impact.

Comprehension beyond the hype

What we cannot do is ignore the existence of GenAI platforms easily accessible to our students. In an article for The Guardian, the eminent media scholar Siva Vaidhyanathan warned us as early as May 2023 that we might be “committing two grave errors at the same time. We are hiding from and eluding artificial intelligence because it seems too mysterious and complicated, rendering the current, harmful uses of it invisible and undiscussed.” GenAI, its output, and its implications need to be understood in all fields and contexts. This encompasses not only grasping the technical aspects of these technologies but also critically analyzing their social, political, and cultural dimensions. Our goal should thus be to cultivate a safe, positive learning environment that stimulates critical thinking. Ideally, universities foster the necessary skills that allow students to evaluate information and build on existing knowledge to make informed decisions outside of any hype discourse. Such skills will not become less relevant in times of abundant GenAI content, but more.

“If a computer is right 99% of the time, I wouldn’t want to be the 1% case”

A few days ago my FB memories reminded me of the time I discussed Artificial Intelligence on Swiss National Radio during a segment called “Artificial intelligence: between fantasy and reality”. The program was in French, and I’ve always wanted to translate it. Now seems as good a time as any, so no more procrastinating.

The title of this blog post is drawn from the interview and alludes to the fact that if you have been misjudged by AI, you could have a hard time rectifying the situation, because algorithmic decision-making makes it difficult to know whom or what to hold accountable. When reading, please keep in mind that this is drawn from a spoken, non-scripted discussion that originally took place in another language. Furthermore, it’s from one year ago, which is why there is no mention of recent AI initiatives such as AI Now or Ethics and Governance of AI. While it was not my best interview, and there is so. much. more. to say about AI, I might still have managed to get a few major points across… What do you think?

The interview (excerpts)

Picture: Roomba

– Moderator: Artificial Intelligence is a reality we talk about more and more often. AI, or the ability of a machine to reason like a human, or even better. And as often, some are gleeful about it whereas others paint a darker picture of the future, even predicting the end of mankind. Well, let’s take a step back and study the question calmly. To do this we’ve got two journalists, Huma Khamis and Didier Bonvin, welcome. And we’re with you, Anna Jobin. You’re a sociologist and doctoral candidate at the Laboratory of Science and Technology Studies (STSlab) of Lausanne University. Anna Jobin, to start, what is your involvement, your link to this “artificial intelligence”?

AJ: As a sociologist I’m interested in the social aspects of technologies, including AI. My own research centers on how humans cohabit with complex algorithmic systems, something we do already. And this is the link: complex algorithmic systems – which are one sort of AI.

– [Mod] So you link the general population and science? Do you try to understand and interpret them for us?

Well, in my opinion science and the general population are not two distinct entities. It’s a symbiosis, with many questions about the use, but also the distribution and creation, of these technologies.

Switch to Huma Khamis, who does an excellent job recalling the history of well-publicized applications of AI, from Deep Blue to AlphaGo and YuMi, and reminds everyone that most of us carry AI in our pocket in the form of a smartphone. She ends by mentioning Ellie, a robot that detects depression, largely based on facial recognition technologies.

– [Mod] Anna Jobin, is this real progress? What do you make of this? Would you say we could do better, that we are behind at this point?

Of course, as has been said, there have been mindblowing advances within the last years. For instance in computation: calculations have always been done, but there has been progress in doing them with computers, merging them with technologies and new materials that have only been in use for decades… Secondly, there has been an automation of these calculations, an automation made possible by these computers. And as a third ingredient I’d point to data, no matter whether they have been generated by sensors and subsequently integrated into the system, or whether they represent “available” digital traces generated by our activities.

– [Mod] At what moment did we go from automated calculations to things like emotion recognition? Was there a border that was crossed at some point, or have we made real progress after years of stagnation?

It is an ancient human dream to reproduce that which makes us human. However, one mustn’t forget that what we consider to be human has changed over years, decades and centuries. It is not the first time that we have located the essence of humanity somewhere, but the idea that it sits in the brain is rather novel.

Huma Khamis and Didier Bonvin discuss Ray Kurzweil, his theory of “singularity”, and what makes us human: feelings? imperfections?

– [HK] So Anna Jobin, you’re part of the Laboratory of digital cultures and humanities, do you think this AI will be able to generate a culture and feelings of its own? And to evolve as we evolve with our imperfections? Will it be able to create imperfections?

AI is already creating its own culture if we look at Netflix and its recommendation and classification algorithms. But it’s always in symbiosis with humans, which is why I think the idea of the “cyborg” is much closer to reality than a neat distinction between mankind and machine. A distinction that is rather recent and considers both as two clearly separated species by, notably, elevating machines to a species of their own. This of course paves the way to “robots rise up and fight for their survival” – which in and by itself is a very interesting vision of things…

But if we speak of the future, what I’m actually interested in is why we speak in a certain way about the future. I think our visions, fears, utopias and dreads reveal more about us today than they do about the future.

– [HK] Speaking of dreads and fears, we spend a lot of time trying to save human treasures, for instance in Digital Humanities. Is this an emergency because we will disappear?

Humans have always aimed at documentation, from oral tradition to writing to printing et cetera. Now that these great tools of information storage are available, it does not seem like a surprising step that we try to make use of them for archiving and for digitizing our heritage. Of course they imply questions about the ways in which a format imposes its particularities on the content, but that’s not what we’re discussing today. What seems much more important to me regarding dreads and fears – without going down the road all the way to the end of humankind – are the forms of autonomy within systems that learn “by themselves”, without forgetting that they have at some point been programmed to learn, so there has been human intervention at the very beginning. […] There have been decisions about, for instance, the process by which the system will learn, or the parameters that will be taken into account for the learning. Although we might have access neither to the exact process of learning, as is the case in deep learning, nor to the justification of the results, there have been definitions and human values influencing the system at the very beginning. However, the problems begin if we don’t have access to the process of justification. Let’s imagine a robot lets us know that, according to its calculations, it would be unreasonable to undertake a medical intervention. Because, taking into account your age and what you contribute to society through your work, a certain medical intervention might simply not be worth it? … Who are you going to discuss things with? Are you going to argue with a robot, a machine? Or a doctor? And which of these options are you more comfortable with?

There follows a discussion about the Turing test and the chatbot Eugene Goostman, which had been announced to have passed it before experts quickly denied its “victory”.

– [Mod] What do you think about this Anna Jobin? There’s debate…

The Turing test is very interesting, and it has sparked a competition in the development of chatbots, which is great. Then again, it is a small test within a very limited area: conversation, and to be precise, linear conversation, which goes question/answer and so forth. It’s a very limited form of human interaction. If we look at artificial intelligence, let’s start by asking the question about intelligence and what we actually mean by it. Logical intelligence, linguistic intelligence – but is there creative intelligence, emotional intelligence, inter- or intra-personal intelligence? Et cetera. And yes, there is great progress in very specialized areas, and scientific intelligence…

– [Mod] Several areas progress at the same time.

… yes, but to combine all of these and proclaim that the sum of these parts makes a human is, I am convinced, the wrong conclusion.

DB mentions the Open Letter on AI and how Stephen Hawking thinks AI could bring about the end of mankind.

The point you’ve been making about being worried that there will be a threat 50 or 100 years from now [in the form of a robot uprising]… it’s still rather hypothetical, and I suggest we leave it to Hollywood and science fiction authors. However, there’s the much more recent issue of weapons such as L.A.W.S., lethal autonomous weapon systems. These have very much been created by humans. At some point it is a political issue: what do we want to do with these possibilities – no matter whether we call it “AI”, or “technological power”, or whatever. It is a question for humans: why do we want to use it, what do we want to develop? We’re all impressed by a robot, and well, a biped, advancing on two legs…

– [Mod] … you’re speaking of Google’s Atlas robot. It walks on its own on snow, and if pushed it gets up again.

Yes, and that really is impressive technologically speaking. However, let’s not forget that Boston Dynamics is also in the military business, and even if Google makes promises about its use…

– [Mod] … it will only be used for the love of humanity.

To balance things, HK underlines areas where AI is used for good, e.g. the medical domain, care, etc.

– [Mod] Your last words, Anna Jobin?

I’d like to take up what Huma Khamis said. The potential exists, but it is up to humans to make up their minds about what they will use it for: will it be used for good? But also: are predictions based on the correct model? Meaning: in which areas might it be useful to predict the future based on the past, and are, for instance, statistical evaluations the right model? If a computer is right 99% of the time, I wouldn’t want to be the 1% case. How are we going to deal with these questions with regard to potential harm, with regard to transparency of the process, and with regard to responsibility?
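A back-of-the-envelope calculation (with invented numbers, purely for illustration) shows why that 1% is no negligible remainder once decisions are automated at scale:

# Illustrative arithmetic only: the decision volume is hypothetical,
# not a figure from the interview.
decisions_per_year = 1_000_000   # hypothetical number of automated decisions
accuracy = 0.99                  # "right 99% of the time"

wrongly_decided = decisions_per_year * (1 - accuracy)
print(f"{wrongly_decided:,.0f} wrong decisions per year")  # 10,000

Behind each of those cases stands a person who, as discussed above, may not even be able to find out whom or what to hold accountable.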

– [Mod] Anna Jobin, sociologist and doctoral candidate at the Laboratory of Science and Technology Studies of Lausanne University, thank you for accepting our invitation.