Inclusive AI: empirical data from civil society

I am thrilled about this most recent collaboration: together with Swissnex San Francisco, the think tank foraus and AI Commons, I have worked on a report that presents empirical data on what the ethical principle of inclusiveness means when it comes to artificial intelligence. Its title is "Towards an Inclusive Future in AI – A Global Participatory Process" and it can be accessed for free on the foraus website.

Based on the Policy Kitchen method (explained in more detail in the report), people from four continents came together: "11 workshops in 8 countries, involving 10 partner organizations and about 120 participants from a wide range of perspectives, collaboratively generated 43 ideas for an inclusive future in AI."

The key take-aways of how inclusivity is understood, and can be achieved, are the following:

  1. Aim at inclusive inclusion
  2. Prevent, detect and eliminate bias in AI systems
  3. Establish open standards & access to data
  4. Alleviate power differentials between corporations and individuals
  5. Guarantee shared benefits and prosperity
  6. Provide access to education
  7. Commit to participatory governance

What is most remarkable about this report is the fact that it draws on empirical data from civil society. It therefore applies its own value of inclusivity to the very process by which the conclusions are achieved: by including and listening to stakeholders when it comes to defining what is at stake.

The report is in English — for a short German summary click here.

By the way, our report was published simultaneously with another great paper, "Making Sense of Artificial Intelligence – Why Switzerland Should Support a Scientific UN Panel to Assess the Rise of AI", which I encourage you to read here.

Towards an inclusive future in AI: what does "inclusive artificial intelligence" mean?

Thanks to Swissnex San Francisco, the think tank foraus and AI Commons, I was able to contribute to the report "Towards an inclusive future in AI. A global participatory process". (The report, along with a short description in English, is available here.)

On October 22, 2019, a press conference took place in Bern at which both our report and a position paper on AI, specifically on the intersection of AI governance and Switzerland ("Making Sense of Artificial Intelligence – Why Switzerland Should Support a Scientific UN Panel to Assess the Rise of AI"), were presented.

Below is a transcript of my German-language presentation of our report "Towards an Inclusive Future in AI: A Global Participatory Process".


Artificial intelligence concerns us all.

Inclusion, participation, integration: these are all important points for a technology that plays an ever more important role in everyone's lives. The principle of inclusion really is central: it appears again and again in ethical guidelines for artificial intelligence around the world. On pages 6 and 7 of the report you will find some examples of where and how "inclusion" appears in existing documents.

What exactly inclusion means, however, and how it can be achieved, that is, how participation can be put into practice, is an open and not entirely simple question. Instead of a theoretical treatise, we used Policy Kitchen in a bottom-up process to ask various people from civil society around the world: "What does inclusive AI, inclusive artificial intelligence, mean to you, and how can it be achieved?"

The result shows that inclusion cannot be achieved with a single magic recipe. Our participants, too, associate inclusion with several complementary approaches, each operating at a different level.

I will now describe the individual approaches in a little more detail. In the report, you will find the summary as a list on page 9 and again as a conclusion ("Dessert") from page 22 onwards. Let me stress once more how important, from a socio-political perspective, the process was that led to these results: the approaches are based on data. They reflect the views of various members of civil society in several countries.

First, people associate inclusion with the elimination of bias. Bias (distortion, prejudice, imbalance) in artificial intelligence systems arises, for example, through skewed datasets: existing social prejudices are reflected in the data and then amplified by machines.

According to our participants, inclusive AI must actively counteract prejudice. To prevent but also to remedy bias, they proposed technical measures as well as, for example, dedicated quality controls. These should be systematized at various points during the development, but also during the use, of an AI system.

Organizational restructuring was also mentioned, for example "inclusive teams", i.e. less socially homogeneous teams, or institutionalized channels through which users can give feedback about bias.

Further proposals revolve around a second point: access to data and open standards (open access and open standards). Here, inclusion is understood as making participation technically possible. For if AI technologies are concentrated in the hands of a few, the gap widens over time: the further ahead an organization already is with artificial intelligence, the greater its lead becomes.

Access to data, and open standards so that data, but also systems, can be used and indeed reused, reduce this power differential and lead to greater participation in production.

Speaking of power differentials, that brings me straight to the third point. From page 14 onwards you will find the important point of reducing the power differential between corporations and individuals: "User Rights and Transparency."

Many people understand inclusion as actively working to reduce this power differential. The Policy Kitchen proposals in this area concern above all data sovereignty, transparency and choice. The desire for choice, in particular, links the idea of inclusion to a less homogeneous technical landscape in which technology producers can no longer unilaterally dictate how their systems work and on what terms.


Artificial Intelligence: how many AI principles or ethics guidelines are there and what do they say?

This is it: the study I had been working on all winter (together with my colleague Marcello and our professor Effy Vayena) was published in Nature Machine Intelligence. It is an in-depth review of a corpus of 84 documents consisting of (or containing) ethical principles for artificial intelligence. Although no single principle occurred in all documents, some are more prevalent than others, and others are strikingly underrepresented.

Here is a link to the article “The global landscape of AI ethics guidelines”: https://www.nature.com/articles/s42256-019-0088-2. Unfortunately it is behind a paywall (and we were not able to select the option of having the article published Open Access), but if you get in touch via e-mail (anna.jobin at sociostrategy), on Social Media, or via ResearchGate, I will be more than happy to send you the article. (*)

This is what the abstract says:

In the past five years, private companies, research institutions and public sector organizations have issued principles and guidelines for ethical artificial intelligence (AI). However, despite an apparent agreement that AI should be ‘ethical’, there is debate about both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analysed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.
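The mapping behind these results can be thought of as prevalence counting across a corpus. As a toy sketch only (our actual analysis was a qualitative content analysis, not a literal keyword tally; the document names and principle sets below are invented for illustration):

```python
from collections import Counter

# Hypothetical mini-corpus: each document is represented by the set of
# ethical principles it mentions. We count in how many documents each
# principle occurs, i.e. its prevalence across the corpus.
corpus = {
    "doc_a": {"transparency", "privacy", "justice and fairness"},
    "doc_b": {"transparency", "non-maleficence", "responsibility"},
    "doc_c": {"transparency", "justice and fairness", "privacy"},
}

prevalence = Counter(p for principles in corpus.values() for p in principles)

for principle, n in prevalence.most_common():
    print(f"{principle}: mentioned in {n} of {len(corpus)} documents")
```

Even in this miniature example, "transparency" tops the list while other principles trail behind; the divergence our paper documents lies in how each document interprets such a shared label, which no tally can capture.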

On twitter I have given a little more information about our findings in a short thread:

There are more tweets, and if you click on the date link you should be able to access the whole thread.

Although we analyzed 84 documents, many more AI principles and ethics guidelines exist today. For one, there is the lag between submitting the first version of an article to a journal and the moment it is published (peer review and production take time, though I should add that NMI was much faster than I, a qualitative social scientist, have been used to elsewhere). But there is also a catch-22 built into our research design: our in-depth analysis takes time, and while we were analyzing, even more guidelines kept being issued. At one point we simply had to wrap up… This also explains why our analysis only takes into account the version of the documents our methodology provided us with, and does not account for subsequent versions (the Montreal Declaration, for example, was still in its stakeholder consultation stage, so our analysis does not cover its final version).

Therefore, and for methodological reasons, we are only able to provide a snapshot in time. Yet we hope that our research can serve as an overview and a stepping stone for anyone involved with “ethical AI”, from researchers and scholars to technology developers to policy makers.

(*) FWIW we did post a pre-print version on arXiv.org, though I am compelled to highlight that the arXiv version is not identical to the NMI journal version: it is our author version, before peer review. In addition to the clarifying modifications we were able to make in the final version thanks to the reviewer comments, one document was initially wrongly attributed to the UK instead of the USA (something we were able to correct thanks to a generous reader comment).

Digital competencies in context (and more)

On May 8, 2019, at the invitation of EMEK/COFEM (the Swiss Federal Media Commission, commission fédérale des médias), I had the pleasure of covering, together with Friederike Tilemann, the first third of a thoroughly fascinating afternoon program on "Streamingdienste und Plattformen: Herausforderungen für Medien & Öffentlichkeit" (streaming services and platforms: challenges for media and the public). We addressed media and digital competencies; afterwards, Judith Möller and Sébastien Noir spoke about the relevance of algorithms, and finally Natascha Just and Wolfgang Schulz turned to the topic of governance. The event was public and well attended. Below is a very short, subjective report of mine, including a (hopefully) reader-friendly version of my opening presentation. Because, as the saying goes: sharing is caring.

After welcoming remarks by EMEK/COFEM president Otfried Jarren, Manuel Puppis presented the current state of the commission's working paper. The day's insights are to feed into the forthcoming new version of the document; this, together with the wish to foster public dialogue, motivated the event. Friederike Tilemann then shared her expertise on media competencies: adolescents in particular need both competencies and protection in order to engage with media productively. And the former comprise not only use, but also knowledge, critical judgment, reflective action and creative ability.

This conveniently passed me a ball that I could now carry into the field of digitalization. For most of what had been said about media competencies remains important and relevant. [This is also evident from Genner's "Kompetenzen und Grundwerte im digitalen Zeitalter".] I am offering not a contradiction but a complement.

My presentation likewise addressed the question of how to deal with digital media, but from a somewhat different perspective. My background is in sociology, economics and business informatics on the one hand; on the other, my past as a freelance social media consultant gives me practical experience. Below are my slides, which, to do at least partial justice to Switzerland's multilingualism at this nationwide event, I had written in French, as well as an approximate transcript of my ten-minute presentation, enriched with parenthetical remarks and a few links.

#SmartphoneDemokratie

A short announcement on my own behalf: my first essay in German is about to appear! For years I have mostly been writing in French or English, so I am all the more pleased about the opportunity to be read in my mother tongue.

I owe this to political scientist and media expert Adrienne Fichter, whose current project is a nonfiction book on the intersection of digitalization and politics. When she asked me whether I would contribute a text about algorithms to her book, I answered: "My hobbyhorse! Gladly!"

And so it is that the book "Smartphone-Demokratie", to be published this fall by NZZ Libro, will include a chapter by me.

"Smartphone-Demokratie" can be pre-ordered on the publisher's website. The official publication date is September 16, 2017.

Reading "Smartphone-Demokratie" is certainly also worthwhile for the multitude of different topics that approach digitalization from different angles. The book's subtitle offers a foretaste: #FakeNews #Facebook #Bots #Populismus #Weibo #Civictech

Exciting, isn't it?

“If a computer is right 99% of the time, I wouldn’t want to be the 1% case”

A few days ago my FB memories reminded me of the time I discussed artificial intelligence on Swiss national radio during a segment called "Artificial intelligence: between fantasy and reality". The program was in French, and I have always wanted to translate it. Now seems as good a time as any, so no more procrastinating.

The title of this blog post is drawn from the interview and alludes to the fact that if you have been misjudged by AI, you could have a hard time rectifying the situation, because algorithmic decision-making makes it difficult to know whom or what to hold accountable. When reading, please keep in mind that this is drawn from a spoken, non-scripted discussion that originally took place in another language. Furthermore, it is from one year ago, which is why there is no mention of recent AI initiatives such as AI Now or Ethics and Governance of AI. While it was not my best interview, and there is so. much. more. to say about AI, I might still have managed to get a few major points across… What do you think?

The interview (excerpts)

Picture: Roomba

– Moderator: Artificial intelligence is a reality we talk about more and more often. AI, or the ability of a machine to reason like a human or even better. And as usual, some are gleeful about it whereas others paint a darker picture of the future, even predicting the end of mankind. Well, let's calm down and study the question more calmly. To do this we've got two journalists, Huma Khamis and Didier Bonvin, welcome. And we're with you, Anna Jobin. You're a sociologist and doctoral candidate at the Laboratory of Science and Technology Studies (STSlab) of Lausanne University. Anna Jobin, to start, what is your involvement, your link to this "artificial intelligence"?

AJ: As a sociologist I’m interested in the social aspects of technologies, including AI. My own research centers on how humans cohabit with complex algorithmic systems, something we do already. And this is the link: complex algorithmic systems – which are one sort of AI.

– [Mod] So you link the general population and science? Do you try to understand and interpret them for us?

Well, in my opinion science and the general population are not two distinct entities. It’s a symbiosis with many questions about the use, but also the distribution and creation of these technologies.

Switch to Huma Khamis, who does an excellent job recalling the history of well-publicized applications of AI, from Deep Blue to AlphaGo and YuMi, and reminds everyone that most of us carry AI in our pocket in the form of a smartphone. She ends by mentioning Ellie, a robot detecting depression largely based on face recognition technologies.

– [Mod] Anna Jobin, is this real progress? What do you make of this? Would you say we could do better, are we late at this point?

Of course, as has been said, there have been mind-blowing advances in recent years. For instance in calculations: they have always been done, but there has been progress in doing them with computers, merging them with technologies and new materials that have only been available for decades… Secondly, there has been an automation of these calculations, an automation made possible by these computers. And as a third ingredient I'd point to data, no matter whether they have been generated by sensors and integrated into the system subsequently, or whether they represent "available" digital traces generated by our activities.

– [Mod] At what moment did we go from automated calculations to things like emotion recognition? Has there been a border, at one point, that has been crossed, or have we made real progress after years of stagnation?

It is an ancient human dream to reproduce that which makes us human. However, one mustn't forget that what we consider human has changed over years, decades and centuries. It is not the first time that we have located the essence of humanity in the brain, and even this idea is rather novel.

Huma Khamis and Didier Bonvin discuss Ray Kurzweil, his theory of “singularity”, and what makes us human: feelings? imperfections?

– [HK] So Anna Jobin, you’re part of the Laboratory of digital cultures and humanities, do you think this AI will be able to generate a culture and feelings of its own? And to evolve as we evolve with our imperfections? Will it be able to create imperfections?

AI is already creating its own culture if we look at Netflix and its recommendation and classification algorithms. But it is always in symbiosis with humans, which is why I think the idea of the "cyborg" is much closer to reality than a neat distinction between mankind and machine. That distinction is rather recent and treats the two as clearly separated species, notably by elevating machines to a species of their own. This of course paves the way to "robots rise up and fight for their survival", which in and of itself is a very interesting vision of things…

But if we speak of the future, what I’m actually interested in is why we speak in a certain way about the future. I think our visions, fears, utopias and dreads reveal more about us today than they do about the future.

– [HK] Speaking of dreads and fears, we spend a lot of time trying to save human treasures, for instance in Digital Humanities. Is this an emergency because we will disappear?

Humans have always aimed at documentation, from oral tradition to writing to printing et cetera. Now that these great tools of information storage are available, trying to make use of them for archiving and for digitizing our heritage does not seem like a surprising step. Of course they raise questions about the ways in which a format imposes its particularities on the content, but that's not what we're discussing today. What seems much more important to me regarding dreads and fears, without going all the way to the end of humankind, are the forms of autonomy within systems that learn "by themselves", not forgetting that they have at one point been programmed to learn, so there has been a human intervention at the very beginning. […] There have been decisions about, for instance, the process by which the system will learn, or the parameters that will be taken into account for the learning. Although we might have access neither to the exact process of learning, as is the case in deep learning, nor to the justification of the results, there have been definitions and human values influencing the system from the very beginning. The problems begin, however, if we don't have access to the process of justification. Let's imagine a robot lets us know that, according to its calculations, it would be unreasonable to undertake a medical intervention. Because, taking into account your age and what you contribute to society through your work, a certain medical intervention might simply not be worth it? … Who are you going to discuss things with? Are you going to argue with a robot, a machine? Or a doctor? And which of these options are you more comfortable with?

There follows a discussion about the Turing test and the chatbot Eugene Goostman, which had been announced to have passed it before experts quickly denied its "victory".

– [Mod] What do you think about this Anna Jobin? There’s debate…

The Turing test is very interesting and it has sparked a competition in the development of chatbots, which is great. Then again, it is a small test within a very limited area: conversation, and to be precise: linear conversation, which goes question/answer and so forth. It’s a very limited form of human interaction. If we look at artificial intelligence let’s start by asking the question about intelligence and what we actually mean. Logic intelligence, linguistic intelligence – but is there creative intelligence, emotional intelligence, inter- or intra-personal intelligence? Et cetera. And yes, there is great progress in very specialized areas, and scientific intelligence…

– [Mod] Several areas progress at the same time.

… yes, but to combine all of these and proclaim that the sum of these parts makes a human is, I am convinced, the wrong conclusion.

DB mentions the Open Letter on AI and how Stephen Hawking thinks AI could bring the end of mankind.

The point you've been making about being worried that there will be a threat 50 or 100 years from now [in the form of a robot uprising]… it's still rather hypothetical, and I suggest we leave it to Hollywood and science fiction authors. However, there's the much more pressing issue of weapons such as L.A.W.S., lethal autonomous weapon systems. These have very much been created by humans. At some point it becomes a political issue: what do we want to do with these possibilities, no matter whether we call them "AI" or "technological power" or whatever. It is a question for humans: why do we want to use it, what do we want to develop? We're all impressionable by a robot, and well, a bipedal one, advancing on two legs…

– [Mod] … you're speaking of Google's Atlas robot. It walks on its own on snow, and if pushed it gets up again.

Yes, and that really is impressive technologically speaking. However, let’s not forget that Boston Dynamics is also in the military business, and even if Google makes promises about its use…

– [Mod] … it will only be used for the love of humanity.

To balance things, HK underlines areas where AI is used for good, e.g. the medical domain, care, etc.

– [Mod] Your last words, Anna Jobin?

I'd like to take up what Huma Khamis said. The potential exists, but it is up to humans to decide what they will use it for: is it used for good? But also: are predictions based on the correct model? Meaning: in which areas might it be useful to predict the future based on the past, and whether, for instance, statistical evaluations are the right model. If a computer is right 99% of the time, I wouldn't want to be the 1% case. How are we going to deal with these questions with regard to potential harm, with regard to transparency of the process, and with regard to responsibility?

– [Mod] Anna Jobin, sociologist and doctoral candidate at the Laboratory of Science and Technology Studies of Lausanne University, thank you for accepting our invitation.

SGS-SSS: Call for papers // Appel à contributions

Tl;dr (in English): Together with a brilliant colleague of mine, I am organizing a panel at the Congress of the Swiss Sociological Association (held in Zurich, June 21-23) about the political dimensions of digital platforms. Please consider contributing in English, German or French by February 20, 2017.

See below for the French version.

The Congress of the Swiss Sociological Association takes place every two years, next on June 21-23 in Zurich. Together with Loïse Bilat, I am organizing an academic workshop there on information fragmentation from a sociotechnical perspective. The workshop grew out of the convergence of our research interests: my colleague specializes in the analysis of ideologies, in particular neoliberal ideology. We invite interested researchers to present and discuss their results and/or reflections in English, German or French.


Here is our call for contributions:

Political dimensions of digital platforms: a sociotechnical approach to information fragmentation

Keywords: sociotechnics, STS, media epistemology, political sociology, history of ideas
