This is it: the study I had been working on all winter (together with my colleague Marcello and our professor Effy Vayena) was published in Nature Machine Intelligence. It is an in-depth review of a corpus of 84 documents consisting of (or containing) ethical principles for artificial intelligence. Although no single principle occurred in all documents, some are markedly more prevalent than others, and some are strikingly underrepresented.
Here is a link to the article “The global landscape of AI ethics guidelines”: https://www.nature.com/articles/s42256-019-0088-2. Unfortunately it is behind a paywall (and we were not able to select the option of having the article published Open Access), but if you get in touch via e-mail (anna.jobin at sociostrategy), on social media, or via ResearchGate, I will be more than happy to send you the article. (*)
This is what the abstract says:
In the past five years, private companies, research institutions and public sector organizations have issued principles and guidelines for ethical artificial intelligence (AI). However, despite an apparent agreement that AI should be ‘ethical’, there is debate about both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analysed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.
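To give a concrete sense of what “convergence” means here: the analysis rests on coding which ethical principles each document mentions and comparing their prevalence across the corpus. The toy sketch below is purely illustrative, with made-up data and document names; it is not the coding pipeline we actually used in the paper.

```python
from collections import Counter

# Hypothetical, hand-coded data: for each guideline document, the set of
# ethical principles it was coded as containing. (Illustrative only; the
# actual study coded 84 documents against a detailed codebook.)
corpus = {
    "Document A": {"transparency", "justice and fairness", "privacy"},
    "Document B": {"transparency", "non-maleficence", "responsibility"},
    "Document C": {"justice and fairness", "non-maleficence", "privacy"},
}

# Count how many documents mention each principle.
prevalence = Counter(p for principles in corpus.values() for p in principles)

# Report principles from most to least prevalent across the corpus.
for principle, count in prevalence.most_common():
    share = count / len(corpus)
    print(f"{principle}: {count}/{len(corpus)} documents ({share:.0%})")
```

A tally like this shows prevalence only; the substantive divergences the abstract mentions (how a principle is interpreted, why it matters, to whom it applies) require qualitative analysis of the documents themselves.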
On Twitter, I have given a little more information about our findings in a short thread:
There are more tweets, and if you click on the date link you should be able to access the whole thread.
Although we analyzed 84 documents, many more AI principles and ethics guidelines exist today. For one, there is a lag between submitting the first version of an article to a journal and the moment it is published (peer review and production take time, though I should add that NMI has been much faster than I, a qualitative social scientist, am used to from other experiences). But there is also a catch-22 built into our research design: our in-depth analysis takes time, and while we were analyzing existing guidelines, even more principles were being issued. At one point we simply had to wrap up… This also explains why our analysis only takes into account the versions of the documents our methodology provided us with, and does not account for subsequent revisions (the Montreal Declaration, for example, was still at the stakeholder-consultation stage, so our analysis does not cover its final version).
Therefore, and for methodological reasons, we can only provide a snapshot in time. Yet we hope that our research can serve as an overview and a stepping stone for anyone involved with “ethical AI”, from researchers and scholars to technology developers and policy makers.
(*) FWIW we did post a pre-print version on arXiv.org, though I should highlight that the arXiv version is not identical to the NMI journal version: it is our author version, from before peer review. It lacks the clarifications we were able to make in the final version thanks to the reviewer comments, and it wrongly attributes one document to the UK instead of the USA (an error we were able to correct thanks to a generous reader’s comment).