Category Archives: Technology

Should universities ban, use, or cite Generative AI?

The International Association of Universities (IAU) has asked me to write a short perspective on Generative Artificial Intelligence, which I have been allowed to also post below. It is forthcoming in the IAU’s magazine IAU Horizons Vol. 29 (1) in May 2024 and will be available on the following page: https://www.iau-aiu.net/IAU-Horizons


Picture: Rick Payne and team / Better Images of AI / Ai is… Banner / CC-BY 4.0

Ban, use, or cite Generative AI?

Should universities ban the use of generative AI (GenAI) in written works or, on the contrary, teach how to integrate it into learning practices? Extractive data practices of many available GenAI platforms support the first stance, whereas the general hype around AI and widespread access may favor the second one. However, neither position does justice to the university’s epistemic mission in teaching. Instead of focusing on banning or imposing new information technologies, universities should more than ever strive to provide the conditions within which humans can learn.

Digital transformation

The narrative of AI as a revolutionary force overlooks the foundational role of digitization and connectivity, with the Internet and web technologies pioneering the changes we now attribute to AI. These earlier innovations have profoundly impacted how information is accessed, consumed, created, and distributed. They have been used by our students from early on: from Google searches about topics or the spelling of words to reading Wikipedia articles, from sharing course notes online to asking for homework help in Internet forums, the university learning experience had already been changing long before the arrival of GenAI. At the same time, students’ learning experience has always included taking responsibility for their work, no matter how it was created.

Common misconceptions

Internet and web technologies have also facilitated unprecedented digital data generation and accumulation, which have served to create current GenAI models. Today, few would advocate for a complete ban on access to web search or Wikipedia at universities. I therefore find it curious to see how GenAI starts such conversations anew. Why? Because GenAI is neither a source nor an author. Attributing human-like thinking or consciousness to it is misleading. GenAI does not provide knowledge. It is a powerful computational tool that generates output based on previous data, parameters and probabilities. These outputs can be used by humans for inspiration, modification, copy-paste, or simply be ignored.
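What "output based on previous data, parameters and probabilities" means can be made concrete with a deliberately tiny sketch. The following Python snippet is purely illustrative: the hand-made probability table stands in for a trained model's parameters (all words and numbers here are invented), and "generation" is nothing more than repeated weighted sampling:

```python
import random

# Toy illustration (invented numbers): a generative model assigns
# probabilities to possible next words based on patterns in previous
# data. Generating text is repeated weighted sampling from those
# probabilities -- no understanding, no sources, no authorship.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "university": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.3, "ran": 0.7},
    "university": {"teaches": 1.0},
}

def generate(start, max_words, rng=random):
    """Sample a word sequence, one probabilistic step at a time."""
    words = [start]
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:  # no known continuation for this word
            break
        choices, weights = zip(*options.items())
        words.append(rng.choices(choices, weights=weights, k=1)[0])
    return " ".join(words)

print(generate("the", 2))  # output varies from run to run
```

Real GenAI models operate at a vastly larger scale, with billions of learned parameters instead of a hand-made table, but the principle of sampling output from probabilities rather than retrieving knowledge is the same. It also shows why each request yields individualized, generally non-reproducible output.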

At our university, students do not need to reference the use of thesauri, on- and offline dictionaries, writing correction software, or conversations with others about the topic in their writing. I am not fond of the idea of generically referencing the use of GenAI. Ascribing GenAI the status of a source or author to be cited is a profound mischaracterization of how the technology works and further reiterates the AI hype narrative. Moreover, it may wrongly incentivize students to view GenAI output similarly to other types of sources we already ask them to cite. But because GenAI generates individualized output with each request, hence its name, such output cannot be traced back or reproduced in the future. I fail to see what would be gained by citing it, unless it is for specific educational purposes.

Ethical challenges

Should the use of GenAI be encouraged, then? If it is such a powerful computational tool, harnessing its benefits within universities seems not only justified but necessary. However, as so often, it is complicated. Thanks to scholars in the humanities and social sciences, as well as activists and journalists, we know better than to uncritically endorse any of these platforms. There are valid criticisms that can, and should, be raised against GenAI platforms, such as illegal data-acquisition strategies, veiled data labor, a lack of basic testing and missing ethical guardrails, dubious business motives, a lack of inclusive governance, and harmful environmental impact.

Comprehension beyond the hype

What we cannot do is ignore the existence of GenAI platforms easily accessible to our students. In an article for The Guardian, the eminent media scholar Siva Vaidhyanathan warned us as early as May 2023 that we might be “committing two grave errors at the same time. We are hiding from and eluding artificial intelligence because it seems too mysterious and complicated, rendering the current, harmful uses of it invisible and undiscussed.” GenAI, its output, and its implications need to be understood in all fields and contexts. This encompasses not only grasping the technical aspects of these technologies but also critically analyzing their social, political, and cultural dimensions. Our goal should thus be to cultivate a safe, positive learning environment that stimulates critical thinking. Ideally, universities foster the skills that allow students to evaluate information and build on existing knowledge to make informed decisions outside of any hype discourse. Such skills will not become less relevant in times of abundant GenAI content, but rather more.

A short statement on the open letter on Artificial Intelligence [translated from German]

The starting point, summarized in one sentence: Last Tuesday, the Future of Life Institute (FLI) published an open letter, signed by respectable scientists as well as characters like EIon Mvsk, which warns of the danger of “human-like Artificial Intelligence” and demands, among other things, a six-month development stop for all AI systems more intelligent than GPT-4.

Due to the strong media response, I was asked on Wednesday evening, as a researcher at the Alexander von Humboldt Institute for Internet and Society (HIIG), to write a short assessment, which I reproduce here in full:

The Future of Life Institute’s open letter is window dressing: it describes a fantasy world in which existing AI is, apart from a few technical updates, free of problems, and in which a six-month development pause is enough to create suitable regulatory frameworks for the supposedly inevitable superintelligence.

In this way, the problems of systems that exist today are not only ignored but played down. The topic of superintelligence belongs in philosophy seminars, not in politics.

The letter’s exclusive focus on the development stage of AI is also very short-sighted. Strangely, no restrictions on deployment are demanded, even though with AI the context of application is at least as important as the creation.

The mention of fundamentally sensible measures such as audits or labeling seems almost cynical when the first signatories include decision-makers who have never introduced such measures themselves.

And although the letter rightly states that important decisions should not be left to unelected tech leaders, its publication now achieves exactly the opposite: agenda setting by tech leaders.

Honi soit qui mal y pense.

Some of my words did indeed find their way on Friday into the TAZ, into the Tagesspiegel KI & Digitalisierung, as well as into the coverage on Netzpolitik.org.

Incidentally, also on Friday, the authors of the seminal AI article “Stochastic parrots” published a statement that likewise criticizes the FLI’s open letter and, moreover, gives a well-founded explanation of why hypothetical doom-mongering about future powerful AI is harmful.

Inclusive AI: empirical data from the civil society

I am thrilled about this most recent collaboration: together with Swissnex San Francisco, the think tank foraus, and AI commons, I have worked on a report that presents empirical data on what the ethical principle of inclusiveness means when it comes to artificial intelligence. Its title is “Towards an Inclusive Future in AI – A Global Participatory Process”, and it can be accessed for free on the foraus website.

Based on the policy kitchen method (explained in more detail in the report), people from four continents gathered in “11 workshops in 8 countries, involving 10 partner organizations and about 120 participants from a wide range of perspectives,” and collaboratively generated 43 ideas for an inclusive future in AI.

The key take-aways of how inclusivity is understood, and can be achieved, are the following:

  1. Aim at inclusive inclusion
  2. Prevent, detect and eliminate bias in AI systems
  3. Establish open standards & access to data
  4. Alleviate power differentials between corporations and individuals
  5. Guarantee shared benefits and prosperity
  6. Provide access to education
  7. Commit to participatory governance

What is most remarkable about this report is the fact that it draws on empirical data from civil society. It therefore applies its own value of inclusivity to the very process by which the conclusions are achieved: by including and listening to stakeholders when it comes to defining what is at stake.

The report is in English — for a short German summary click here.

By the way, our report has been published simultaneously with another great paper, “Making Sense of Artificial Intelligence – Why Switzerland Should Support a Scientific UN Panel to Assess the Rise of AI”, which I encourage you to read here.

“If a computer is right 99% of the time, I wouldn’t want to be the 1% case”

A few days ago my FB memories reminded me of the time I discussed Artificial Intelligence on Swiss National Radio during a segment called “Artificial intelligence: between fantasy and reality”. The program was in French, and I’ve always wanted to translate it. Now seems as good a time as any, so no more procrastinating.

The title of this blog post is drawn from the interview and alludes to the fact that if you have been misjudged by AI, you could have a hard time rectifying the situation – because algorithmic decision-making makes it difficult to know whom or what to hold accountable. When reading, please keep in mind that this is drawn from a spoken, non-scripted discussion originally taking place in another language. Furthermore, it’s from one year ago, which is why there is no mention of recent AI initiatives such as AI Now or Ethics and Governance of AI. While it was not my best interview, and there is so. much. more. to say about AI, I might still have managed to get a few major points across… What do you think?

The interview (excerpts)

Picture: Roomba

– Moderator: Artificial Intelligence is a reality we talk about more and more often. AI or the ability of a machine to argue like a human or even better. And as often, some are gleeful about it whereas others paint a darker picture of the future, even predicting the end of mankind. Well, let’s calm down and study the question more calmly. To do this we’ve got two journalists, Huma Khamis and Didier Bonvin, welcome. And we’re with you, Anna Jobin. You’re a sociologist and doctoral candidate at the Laboratory of Science and Technology Studies (STSlab) of Lausanne University. Anna Jobin, to start, what is your implication, your link to this “artificial intelligence”?

AJ: As a sociologist I’m interested in the social aspects of technologies, including AI. My own research centers on how humans cohabit with complex algorithmic systems, something we do already. And this is the link: complex algorithmic systems – which are one sort of AI.

– [Mod] So you link the general population and science? Do you try to understand and interpret them for us?

Well, in my opinion science and the general population are not two distinct entities. It’s a symbiosis with many questions about the use, but also the distribution and creation of these technologies.

Switch to Huma Khamis, who does an excellent job recalling the history of well-publicized applications of AI, from Deep Blue to AlphaGo and YuMi, and reminds everyone that most of us carry AI in our pocket in the form of a smartphone. She ends by mentioning Ellie, a robot detecting depression largely based on face recognition technologies.

– [Mod] Anna Jobin, is this real progress? What do you make of this? Would you say we could do better, are we late at this point?

Of course, as has been said, there have been mindblowing advances within the last years. For instance in calculations – they have always been done, but there has been progress in doing them with computers, merging them with technologies, new materials that have only been used for decades… Secondly, there has been an automation of these calculations, an automation made possible by these computers. And as a third ingredient I’d point to data, no matter whether they have been generated by sensors and integrated in the system subsequently, or whether they represent “available” digital traces generated by our activities.

– [Mod] At what moment did we go from automated calculations to things like emotion recognition? Has there been a border, at one point, that has been crossed, or have we made real progress after years of stagnation?

It is an ancient human dream to reproduce that which makes us human. However, one mustn’t forget that what we consider being human has changed over years, decades and centuries. It is not the first time that we have located the essence of humanity in the brain, but even this idea is rather novel.

Huma Khamis and Didier Bonvin discuss Ray Kurzweil, his theory of “singularity”, and what makes us human: feelings? imperfections?

– [HK] So Anna Jobin, you’re part of the Laboratory of digital cultures and humanities, do you think this AI will be able to generate a culture and feelings of its own? And to evolve as we evolve with our imperfections? Will it be able to create imperfections?

AI is already creating its own culture if we look at Netflix and its algorithms of suggestion and classification. But it’s always in symbiosis with humans, which is why I think the idea of the “cyborg” is much closer to reality than a neat distinction between mankind and machine. A distinction that is rather recent and considers both as two clearly separated species by, notably, elevating machines to a species on its own. This of course paves the way to “robots rise up and fight for their survival” – which in and by itself is a very interesting vision of things…

But if we speak of the future, what I’m actually interested in is why we speak in a certain way about the future. I think our visions, fears, utopias and dreads reveal more about us today than they do about the future.

– [HK] Speaking of dreads and fears, we spend a lot of time trying to save human treasures, for instance in Digital Humanities. Is this an emergency because we will disappear?

Humans have always aimed at documentation, from oral tradition to writing to printing et cetera. Now that these great tools of information storage are available, it does not seem like a surprising step that we try to make use of them for archiving and for digitizing our heritage. Of course they imply questions about the ways in which a format imposes its particularities on the content, but that’s not what we’re discussing today. What seems much more important to me regarding dreads and fears – without going down the road all the way to the end of humankind – are the forms of autonomy within systems that learn “by themselves”, without forgetting that they have at one point been programmed to learn, so there has been a human intervention at the very beginning. […] There have been decisions about, for instance, the process by which the system will learn, or the parameters that will be taken into account for the learning. Although we might have access neither to the exact process of learning, as is the case in deep learning, nor to the justification of the results, there have been definitions and human values influencing the system at the very beginning. However, the problems begin if we don’t have access to the process of justification. Let’s imagine a robot lets us know that, according to its calculations, it would be unreasonable to undertake a medical intervention. Because, taking into account your age and what you contribute to society through your work, a certain medical intervention might simply not be worth it? … Who are you going to discuss things with? Are you going to argue with a robot, a machine? Or a doctor? And which of these options are you more comfortable with?

There follows a discussion about the Turing test and the chatbot Eugene Goostman, which had been announced to have passed it before experts quickly denied its “victory”.

– [Mod] What do you think about this Anna Jobin? There’s debate…

The Turing test is very interesting and it has sparked a competition in the development of chatbots, which is great. Then again, it is a small test within a very limited area: conversation, and to be precise: linear conversation, which goes question/answer and so forth. It’s a very limited form of human interaction. If we look at artificial intelligence let’s start by asking the question about intelligence and what we actually mean. Logic intelligence, linguistic intelligence – but is there creative intelligence, emotional intelligence, inter- or intra-personal intelligence? Et cetera. And yes, there is great progress in very specialized areas, and scientific intelligence…

– [Mod] Several areas progress at the same time.

… yes, but to combine all of these and proclaim that the sum of these parts makes a human is, I am convinced, the wrong conclusion.

DB mentions the Open Letter on AI and how Stephen Hawking thinks AI could bring the end of mankind.

The point you’ve been making about being worried that there will be a threat 50 or 100 years from now [in the form of a robot uprising]… it’s still rather hypothetical, and I suggest we leave it to Hollywood and science fiction authors. However, there’s the much more immediate issue of weapons such as L.A.W.S., lethal autonomous weapon systems. These have very much been created by humans. At some point it is a political issue: what do we want to do with these possibilities – no matter whether we call them “AI”, or “technological power”, or whatever. It is a question for humans: why do we want to use it, what do we want to develop. We’re all impressed by a robot, and well, a biped, advancing on two legs…

– [Mod] … you’re speaking of Google’s Atlas robot. It walks on its own on snow, and if pushed it gets up again.

Yes, and that really is impressive technologically speaking. However, let’s not forget that Boston Dynamics is also in the military business, and even if Google makes promises about its use…

– [Mod] … it will only be used for the love of humanity.

To balance things, HK underlines areas where AI is used for good, e.g. the medical domain, care, etc.

– [Mod] Your last words, Anna Jobin?

I’d like to take up what Huma Khamis said. The potential exists, but it is up to humans to make up their minds what they will use it for: is it used for good? But also: are predictions based on the correct model? Meaning: in which areas might it be useful to predict the future based on the past, and whether, for instance, statistical evaluations are the right model. If a computer is right 99% of the time, I wouldn’t want to be the 1% case. How are we going to deal with these questions with regard to potential harm, with regard to transparency of the process, and with regard to responsibility?

– [Mod] Anna Jobin, sociologist and doctoral candidate at the Laboratory of Science and Technology Studies of Lausanne University, thank you for accepting our invitation.

Algorithms: an interview and reading suggestions [translated from French]

Prologue

A general-interest magazine published this week quotes me in a feature on algorithms. Titled “Les algorithmes veulent-ils notre peau?” (“Are algorithms out to get us?”), it gives no definitive answer to that question but approaches the subject from several angles, giving the floor to specialists from different fields.

The article was written shortly before the American elections, but its subject could hardly be more topical: among other burning issues (such as the responsibility of the media and their journalistic approach, the function of polls, and the social factors that favor extremism and authoritarianism), the surprising result of this presidential election has also drawn attention to the role potentially played by online platforms and their algorithmic management of news, true or false.

For his feature in Femina, the journalist Nicolas Poinsot asked me seven questions, of which only a small portion of the answers made it into the final version for lack of space. He kindly gave me permission to reproduce the interview in full, which you can read below. The influence of digital platforms on current politics is not addressed in it, but given the news it seems worthwhile to add a few reading suggestions at the end of this post.

The interview

– Which areas of our lives are affected by algorithms?
AJ: As soon as we use the Internet, a digital tool, or simply an automated device, we interact directly with algorithmic systems. To this is added the indirect influence of algorithms, for example the fact that we inhabit a world increasingly optimized for algorithmic management, whether we make use of it or not.

– Have we seen an increase in the use of these algorithms in recent years? And if so, why?
AJ: Yes, clearly, and it is linked to digitization. Two main characteristics are worth distinguishing: on the one hand, digital algorithms make it possible to automate a great number of tasks and processes at relatively low cost. On the other hand, there is optimization: thanks to the automatic processing of digital data, the latter can be collected, stored, and exploited exhaustively and in a very targeted manner.

– What developments and excesses are possible with “deep learning”? Continue reading

Technology, innovation and society: five myths debunked

Recently, I gave a lecture about the digital transformation for the Franco-Swiss CAS/EMBA program in e-tourism. Since the tourism industry is not my specialty, and the “social media” aspects had been thoroughly covered by colleagues, I had been specifically asked to convey a big-picture view.

I chose to address some overall issues related to ICT (information & communication technology), innovation and society by debunking the following five myths:

  1. Ignoring the digital transformation is possible
  2. Technological progress is linear
  3. Connectivity is a given
  4. Virtual vs. “real” life
  5. Big Data – the answer to all our questions

Each of these points would deserve a treatise of its own, and I will not be able to go into much detail within the scope of this article. I nevertheless wanted to share some of the links and references mentioned during my lecture and related to these issues. If you prefer to read the whole thing in French, please go to Enjeux technologiques et sociaux: cinq idées reçues à propos du numérique, which is the corresponding (but not literally translated) article in French.

Myth no. 1: Ignoring the digital transformation is possible

While discussions of online social networks have become mainstream, the digital transformation goes way beyond social media. It is about more than visible communication. It is about automation, computation, and algorithms. And as I have written before: algorithms are more than a technological issue because they involve not only automated data analysis, but also decision-making. As early as 1961, C.P. Snow said:

«Those who don’t understand algorithms, can’t understand how the decisions are made.»

In order to illustrate the vastness of computation and algorithmic automation I mentioned Frédéric Kaplan’s information mushroom (“champignon informationnel”), my explorations of Google Autocomplete, as well as the susceptibility of a job to be made redundant in the near future by machine learning and mobile robotics (cf. this scientific working paper, or the interactive visualisation derived from it).

Myth no. 2: Technological progress is linear

This point included a little history, drawing on the sociology of knowledge and innovation studies.

Continue reading

Technological and social stakes: five misconceptions about digital technology [translated from French]

This article was originally published in French. English-speaking readers might want to head over to Technology, innovation and society: five myths debunked, the corresponding English post.

This article sketches my talk in an EMBA / CAS training module a few days ago. The goal was to make participants aware of information technologies as a source of major innovations and to draw their attention to some of the social stakes of ICT. To make such a broad overview at least somewhat digestible, I decided to present it in five chapters, each debunking a common misconception about digital technology:

  1. Ignoring the digital transformation is possible
  2. Technological progress is linear
  3. Connectivity is a given
  4. There is the virtual, and then there is “real life”
  5. “Big data”: the solution to everything

Below is the presentation, followed by a few explanatory sentences with links/references.

The presentation:

Misconception no. 1: Ignoring the digital transformation is possible

The digital domain is often considered only from a communication/marketing perspective, sometimes reduced to the sole topics of websites and online social networks. And while a company may well do without a facebook page in full coherence with its strategy, the same cannot be said of digital dynamics and evolution in the broader sense. That is because the digital revolution concerns far more than “social media”. It encompasses all sorts of algorithmic automation. A telling quotation on this subject was uttered by C.P. Snow as early as 1961, and I took it up in a previous post (in English) two and a half years ago:

«Those who don’t understand algorithms, can’t understand how the decisions are made.»

Illustrating some of the stakes of algorithmic automation, I mentioned Frédéric Kaplan’s “champignon informationnel” (information mushroom), my explorations of Google Autocomplete, and the calculations of a job’s “probability of replaceability” (from a scientific working paper, turned into an interactive visualization) thanks to advances in machine learning and mobile robotics.

Misconception no. 2: Technological progress is linear

For this point, a little dive into the sociology of knowledge and technology:

Continue reading