Category Archives: Analysis

Artificial Intelligence: how many AI principles or ethics guidelines are there and what do they say?

This is it: the study I had been working on all winter (together with my colleague Marcello and our professor Effy Vayena) was published in Nature Machine Intelligence. It is an in-depth review of a corpus of 84 documents consisting of (or containing) ethical principles for artificial intelligence. Although no single principle occurred in all documents, some are more prevalent than others — and others are strikingly underrepresented.

Here is a link to the article “The global landscape of AI ethics guidelines”: https://www.nature.com/articles/s42256-019-0088-2. Unfortunately it is behind a paywall (and we were not able to select the option of having the article published Open Access), but if you get in touch via e-mail (anna.jobin at sociostrategy), on Social Media, or via ResearchGate, I will be more than happy to send you the article. (*)

This is what the abstract says:

In the past five years, private companies, research institutions and public sector organizations have issued principles and guidelines for ethical artificial intelligence (AI). However, despite an apparent agreement that AI should be ‘ethical’, there is debate about both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analysed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.
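Purely to illustrate the kind of prevalence count behind this convergence finding, here is a toy sketch in Python. The documents and principle labels below are made up for the example; this is not our actual corpus, data, or coding scheme:

```python
from collections import Counter

# Each document is represented by the set of ethical principles coded in it
# (toy data, for illustration only).
documents = [
    {"transparency", "privacy", "responsibility"},
    {"transparency", "justice and fairness", "non-maleficence"},
    {"privacy", "transparency", "justice and fairness"},
    {"responsibility", "non-maleficence", "transparency"},
]

# Count in how many documents each principle appears.
prevalence = Counter(p for doc in documents for p in doc)

for principle, count in prevalence.most_common():
    print(f"{principle}: {count}/{len(documents)} documents")
```

In this toy corpus, "transparency" appears in every document while the other principles each appear in only half of them, mirroring (in miniature) the pattern of convergence with uneven coverage that the abstract describes.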

On Twitter, I have given a little more information about our findings in a short thread:

There are more tweets, and if you click on the date link you should be able to access the whole thread.

Although we analyzed 84 documents, many more AI principles and ethics guidelines exist today. For one, there is the delay between submitting the first version of an article to a journal and the moment it is published (peer review and production take time, and I would like to add that NMI has been much faster than I, a qualitative social scientist, am used to from other experiences). But there is also a catch-22 built into our research design: our in-depth analysis takes time, and while we were analyzing guidelines, even more principles were being issued. At one point we simply had to wrap up… This also explains why our analysis only takes into account the version of the documents our methodology provided us with, and does not account for subsequent versions (the Montreal Declaration, for example, was in its stakeholder-consultation stage, so our analysis is not about its final version).

Therefore, and for methodological reasons, we are only able to provide a snapshot in time. Yet we hope that our research can serve as an overview and a stepping stone for anyone involved with “ethical AI”, from researchers and scholars to technology developers to policy makers.

(*) FWIW we did post a pre-print version on arXiv.org, though I am compelled to highlight that the arXiv version is not identical with the NMI journal version: it is our author version, before peer-review, and in addition to the clarifying modifications we were able to make in the final version thanks to the reviewer comments, one document was initially wrongly attributed to the UK instead of the USA (something we were able to correct thanks to a generous reader comment).

[French] Algorithmes: entretien et suggestions de lectures

Prologue

A general-interest magazine published this week quoted me in a feature on algorithms. Entitled “Les algorithmes veulent-ils notre peau?” (roughly, “Are algorithms out to get us?”), it gives no definitive answer to its question but approaches the subject from several angles, letting specialists from different fields have their say.

The article was written shortly before the American elections, but its subject could hardly be more topical: among other burning issues (notably the responsibility of the media and their journalistic approach, the function of polls, and the social factors that foster extremism and authoritarianism), the surprising result of this presidential election has also drawn attention to the role potentially played by online platforms and their algorithmic management of news, real or fake.

For his feature in Femina, the journalist Nicolas Poinsot had asked me seven questions, of which only a small portion of the answers made it into the final version for lack of space. He kindly gave me permission to reproduce the interview in full, which you can read below. The question of digital platforms’ influence on current politics is not addressed in it, but in light of the news it seems worth adding a few reading suggestions at the end of this post.

The interview

– Which areas of our lives are affected by algorithms?
AJ: As soon as we use the internet, a digital tool, or simply an automated device, we interact directly with algorithmic systems. Add to that the indirect influence of algorithms, for example the fact that we inhabit a world increasingly optimized for algorithmic management, whether we make use of it or not.

– Have we seen an increase in the use of these algorithms in recent years? And if so, why?
AJ: Yes, clearly, and it is tied to digitization. Two main characteristics are worth distinguishing: on the one hand, digital algorithms make it possible to automate a great number of tasks and processes at relatively low cost. On the other hand, there is optimization: thanks to the automated processing of digital data, that data can be collected, stored, and exploited exhaustively and in a highly targeted way.

– What developments and possible excesses come with “deep learning”? Continue reading

Google Autocomplete revisited

Google autocomplete suggestions for the search query "Google autocomplete re"

«Did Google Manipulate Search for [presidential candidate]?» was the title of a video that showed up in my facebook feed. In it, the video host argued that upon entering a particular presidential candidate’s name into Google’s query bar, very specific autocomplete suggestions were not showing up although, according to the host, they should have been.

I will address the problems with this claim at a later point, but let’s start by noting that the argument was quickly picked up (and sometimes transformed) by blogs and news outlets alike, inspiring titles such as «Google searches for [candidate] yield favorable autocomplete results, report shows», «Did [candidate]’s campaign boost her image with a Google bomb?», «Google is manipulating search results in favor of [candidate]», and «Google Accused of Rigging Search Results to Favor [candidate]». (Perhaps the most accurate title of the first wave of reporting is by the Washington Times, stating «Google accused of manipulating searches, burying negative stories about [candidate]».)

I could not help but notice the shift of focus from Google Autocomplete to Google Search results in some of the reporting, and there is of course a link between the two. But it is important to keep in mind that manipulating autocomplete suggestions is not the same as manipulating search results, and careless sweeping statements are no help if we want to understand what is going on, and what is at stake – which is what I had set out to do for the first time almost four years ago.

Indeed, Google Autocomplete is not a new topic. For me, it started in 2012, when my transition from entrepreneurship/consulting into academia was smoothed by a temporary appointment at the extremely dynamic, innovative DHLab. My supervising professor was a very rigorous mentor while also giving me great freedom to explore the topics I cared about. Between his expertise in artificial intelligence and digital humanities and my background in sociology, political economy and information management, we identified a shared interest in researching Google Autocomplete algorithms. I presented the results of our preliminary study in Lincoln NE at DH2013, the annual Digital Humanities conference. We argued that autocompletions can be considered “linguistic prosthesis” because they mediate between our thoughts and how we express these thoughts in written language. Furthermore, we underlined how mediation by autocompletion algorithms acts in a particularly powerful way because it intervenes before we have finished formulating our thoughts in writing and may therefore have the potential to influence the actual search queries. A great paper by Baker & Potts, published in 2013, came to the same conclusion and questions “the extent to which such algorithms inadvertently help to perpetuate negative stereotypes“.
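For readers who wonder what an autocompletion algorithm does mechanically, here is a deliberately naive sketch of prefix-based suggestion ranking. The query log, function name, and frequency ranking below are assumptions for illustration only; Google’s actual system is far more complex, personalized, and opaque:

```python
from collections import Counter

# Toy query log (made-up data for illustration).
query_log = [
    "women should have equal pay",
    "women should have equal pay",
    "women should vote",
    "weather tomorrow",
    "women should have equal pay",
]

def autocomplete(prefix, log, k=3):
    """Suggest up to k past queries that share the typed prefix,
    ranked by how often they were searched."""
    counts = Counter(q for q in log if q.startswith(prefix))
    return [query for query, _ in counts.most_common(k)]

print(autocomplete("women should", query_log))
# → ['women should have equal pay', 'women should vote']
```

Even this toy version makes the mediation visible: what other people searched for most often is fed back to the current user before she has finished typing her own thought.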

Back to the video and its claim that, upon entering a particular presidential candidate’s name into Google’s query bar, very specific autocomplete suggestions were not showing up although they should have. But why should they show up? The explanation Continue reading

Technology, innovation and society: five myths debunked

Media Technology old and new

Recently, I held a lecture about the digital transformation for the franco-swiss CAS/EMBA program in e-tourism. The tourism industry not being my specialty, and the “social media” aspects having been thoroughly covered by colleagues, I had been specifically asked to convey a big-picture view.

I chose to address some overall issues related to ICT (information & communication technology), innovation and society by debunking the following five myths:

  1. Ignoring the digital transformation is possible
  2. Technological progress is linear
  3. Connectivity is a given
  4. Virtual vs. “real” life
  5. Big Data – the answer to all our questions

Each of these points would deserve a treatise of its own, and I will not be able to go into much detail within the scope of this article. I nevertheless wanted to share some of the links and references mentioned during my lecture that relate to these issues. If you prefer reading the whole thing in French, please go to Enjeux technologiques et sociaux: cinq idées reçues à propos du numérique, which is the corresponding (but not literally translated) article in French.

Myth no. 1: Ignoring the digital transformation is possible

While discussions of online social networks have become mainstream, the digital transformation goes way beyond social media. It is about more than visible communication. It is about automation, computation, and algorithms. And as I have written before: algorithms are more than a technological issue because they involve not only automated data analysis, but also decision-making. As early as 1961, C.P. Snow said:

«Those who don’t understand algorithms, can’t understand how the decisions are made.»

In order to illustrate the vastness of computation and algorithmic automation, I mentioned Frédéric Kaplan’s information mushroom (“champignon informationnel”), my explorations of Google Autocomplete, as well as the susceptibility of a job to being made redundant in the near future by machine learning and mobile robotics (cf. this scientific working paper, or the interactive visualisation derived from it).

Myth no. 2: Technological progress is linear

This point involved a little history, drawing on the sociology of knowledge and innovation studies.

Continue reading

[French] Enjeux technologiques et sociaux: 5 idées reçues à propos du numérique

Exceptionally, this article was published in French; English-speaking readers might want to head over to Technology, innovation and society: five myths debunked.

This article sketches my talk in an EMBA / CAS training module a few days ago. The aim was to make participants aware of information technologies as a source of major innovations, and to alert them to some of the social issues raised by ICT. To make such a broad overview at all digestible, I decided to present it in five chapters, each debunking a common misconception about digital technology:

  1. Ignoring digital technology is possible
  2. Technological progress is linear
  3. Connectivity is a given
  4. There is the virtual, and there is “real life”
  5. “Big data”: the solution to everything

Below is the presentation, followed by a few explanatory sentences with links/references.

The presentation:

Misconception no. 1: Ignoring digital technology is possible

The digital domain is often considered solely from a communication/marketing perspective, sometimes reduced to the topics of websites and online social networks alone. And while a company may well forgo a facebook page in full coherence with its strategy, the same cannot be said of digital dynamics and evolution in the broader sense. This is because the digital revolution is about far more than “social media”. It encompasses all kinds of algorithmic automation. A telling quote on this subject was uttered by C.P. Snow as early as 1961, and I used it in a previous post (in English) two and a half years ago:

«Those who don’t understand algorithms, can’t understand how the decisions are made.»

To illustrate some issues of algorithmic automation, I mentioned Frédéric Kaplan’s “information mushroom” (“champignon informationnel”), my explorations of Google Autocomplete, and the calculations of a job’s “probability of replaceability” (from a scientific working paper, turned into an interactive visualization) thanks to advances in machine learning and mobile robotics.

Misconception no. 2: Technological progress is linear

For this point, a short dive into the sociology of knowledge and technology:

Continue reading

Google’s autocompletion: algorithms, stereotypes and accountability


“questions” by xkcd

Women need to be put in their place. Women cannot be trusted. Women shouldn’t have rights. Women should be in the kitchen. …

You might have come across the latest UN Women awareness campaign. Originally in print, it has been spreading online for almost two days. It shows four women, each “silenced” with a screenshot from a particular Google search and its respective suggested autocompletions.

Researching interaction with Google’s algorithms for my PhD, I cannot help but add my two cents, along with further reading suggestions in the links …

Google's sexist autocompletion UN Women

Women should have the right to make their own decisions

Guess what the most common reaction was?

People headed over to Google to check the “veracity” of the screenshots and to test the suggested autocompletions for “Women should …” and other expressions. I have seen this done all around me, on sociology blogs as well as by people I know.

In terms of an awareness campaign, this is a great success.

And more awareness is a good thing. As the video autofill: a gender study concludes, “The first step to solving a problem is recognizing there is one.” However, people’s reactions have reminded me, once again, how little the autocompletion function had been problematized, in general, before the UN Women campaign. Which, in turn, makes me realize how much of the knowledge about web search engine research I have acquired these last months I already take for granted… but I digress.

This awareness campaign has been very successful in making people more aware of the sexism in our world as reflected in Google’s autocomplete function.

Google's sexist autocompletion UN Women

Women need to be seen as equal

Google’s autocompletion algorithms

At DH2013, the annual Digital Humanities conference, I presented a paper I co-authored with Frederic Kaplan about ongoing research at the DHLab on Google autocompletion algorithms. In this paper, we explained why autocompletions are “linguistic prosthesis”: they mediate between our thoughts and how we express these thoughts in (written) language. So do related searches, or the suggestion “Did you mean … ?” But of all the mediations by algorithms, the mediation by autocompletion algorithms acts in a particularly powerful way because it doesn’t correct us afterwards. It intervenes before we have finished formulating our thoughts in writing. Before we hit ENTER. Continue reading

Digital Film Marketing: more than marketing

Film Industry Marketing free admission

Recently I had the pleasure of speaking at Digital Film Marketing 2, a one-day seminar organised by FOCAL for Swiss film industry professionals. Not only speaking, but also contributing to a better understanding of the digital landscape by answering many questions throughout the day. If you know where I come from, it will not surprise you that I loved it.

There were many interesting discussions and, to my great pleasure, a growing awareness of the lack of knowledge a major part of the film industry has had in digital matters. (And strategy.) One day wasn’t enough time, but it was a solid start to a great conversation and a much-needed mind-shift. Above all, a shared one, which seems much needed too: the few attendees with a great affinity for both cinema and the social web expressed “a sense of relief” after the seminar because they felt “less alone”.

Why did I entitle my presentation “Digital Film Marketing: more than marketing”? Because my three key messages were the following: Continue reading