This is it: the study I had been working on all winter (together with my colleague Marcello and our professor Effy Vayena) was published in Nature Machine Intelligence. It is an in-depth review of a corpus of 84 documents consisting of (or containing) ethical principles for artificial intelligence. Although no single principle occurred in all documents, some are more prevalent than others, and others are strikingly underrepresented.
In the past five years, private companies, research institutions and public sector organizations have issued principles and guidelines for ethical artificial intelligence (AI). However, despite an apparent agreement that AI should be ‘ethical’, there is debate about both what constitutes ‘ethical AI’ and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analysed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted, why they are deemed important, what issue, domain or actors they pertain to, and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.
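To give a rough feel for what “convergence” means here: once each document has been coded with the principles it mentions, the prevalence of a principle is simply a tally across the corpus. The snippet below is only an illustrative sketch with invented document names and codings; our actual analysis was a qualitative content analysis, not a script.

```python
from collections import Counter

# Illustrative sketch only: invented documents and codings, not our actual corpus.
# Each (made-up) guideline document maps to the ethical principles it was coded with.
coded_corpus = {
    "Guideline A": {"transparency", "privacy", "non-maleficence"},
    "Guideline B": {"transparency", "justice and fairness", "responsibility"},
    "Guideline C": {"privacy", "transparency", "justice and fairness"},
    "Guideline D": {"responsibility", "non-maleficence"},
}

# Tally in how many documents each principle appears.
prevalence = Counter(p for principles in coded_corpus.values() for p in principles)

for principle, count in prevalence.most_common():
    print(f"{principle}: {count}/{len(coded_corpus)} documents ({count / len(coded_corpus):.0%})")
```

Even in this toy example no principle reaches 100%, which is the kind of pattern the paper reports at corpus scale: prevalence varies, and no single principle appears in every document.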
On Twitter, I have given a little more information about our findings in a short thread:
There are more tweets, and if you click on the date link you should be able to access the whole thread.
Although we analyzed 84 documents, many more AI principles and ethics guidelines exist today. For one, there is the lag between submitting the first version of an article to a journal and the moment it is published (peer review and production take time, though I would like to add that NMI has been much faster than what I, a qualitative social scientist, have been used to elsewhere). But there is also a catch-22 built into our research design: our in-depth analysis takes time, and while we were analyzing new guidelines, even more principles were being issued. At one point we simply had to wrap up… This also explains why our analysis only takes into account the version of each document our methodology provided us with, and does not account for subsequent versions (the Montreal Declaration, for example, was still in its stakeholder consultation stage, so our analysis does not cover its final version).
Therefore, and for methodological reasons, we can only provide a snapshot in time. Yet we hope that our research can serve as an overview and a stepping stone for anyone involved with “ethical AI”, from researchers and scholars to technology developers and policymakers.
(*) FWIW, we did post a pre-print version on arXiv.org, though I am compelled to highlight that the arXiv version is not identical to the NMI journal version: it is our author version, before peer review. In addition to the clarifying modifications we were able to make in the final version thanks to the reviewer comments, one document was initially wrongly attributed to the UK instead of the USA (something we were able to correct thanks to a generous reader comment).
Today, I could write something very similar regarding the headlines informing us that, according to a recent study, Google’s advertising algorithms discriminate against women. And it is probably a handy opportunity to let you know that my PhD research in social sciences – still ongoing – is precisely about interaction with Google’s advertising algorithms…
However, this blog post is not going to be about my research. But when I saw the headlines about “discriminating advertising algorithms” I simply couldn’t *not* blog about it.
Luckily, WIRED has already taken care of asking the very same question I asked in my 2013 blog post about Google’s autocompletion algorithms: who or what is to blame? In a short but discerning piece WIRED explains the complex configuration of Google AdSense:
Who—or What’s—to Blame?
While the study’s findings would suggest Google is enabling discrimination, the situation is much more complicated.
Currently, Google allows advertisers to target their ads based on gender. That means it’s possible for an advertiser promoting high-paying job listings to directly target men. However, Google’s algorithm may have also determined that men are more relevant for the position and made the decision on its own. And then there’s the possibility that user behavior taught Google to serve ads in this manner. It’s impossible to know if one party here is to blame or if it’s a combination of account targeting from all sources at play.
This configuration has allowed powerful companies to present their services as ‘platforms’: phenomenal and yet supposedly neutral vessels of communication, filled solely by the actions of countless individual users.
Women need to be put in their place. Women cannot be trusted. Women shouldn’t have rights. Women should be in the kitchen. …
You might have come across the latest UN Women awareness campaign. Originally in print, it has been spreading online for almost two days. It shows four women, each “silenced” with a screenshot from a particular Google search and its respective suggested autocompletions.
Researching interaction with Google’s algorithms for my PhD, I cannot help but add my two cents and further reading suggestions in the links …
Women should have the right to make their own decisions
Guess what the most common reaction was?
They headed over to Google in order to check the “veracity” of the screenshots and test the suggested autocompletions for a search for “Women should …” and other expressions. I have seen this done all around me, on sociology blogs as well as by people I know.
In terms of an awareness campaign, this is a great success.
And more awareness is a good thing. As the video autofill: a gender study concludes, “The first step to solving a problem is recognizing there is one.” However, people’s reactions have reminded me, once again, how little the autocompletion function had been problematized, in general, before the UN Women campaign. Which, in turn, makes me realize how much of the knowledge about web search engine research I have acquired these last months I already take for granted… but I digress.
This awareness campaign has been very successful in making people more aware of the sexism in our world, as surfaced by Google’s autocomplete function.
Women need to be seen as equal
Google’s autocompletion algorithms
At DH2013, the annual Digital Humanities conference, I presented a paper I co-authored with Frederic Kaplan about ongoing research at the DHLab on Google’s autocompletion algorithms. In this paper, we explained why autocompletions act as a “linguistic prosthesis”: they mediate between our thoughts and how we express these thoughts in (written) language. So do related searches, or the suggestion “Did you mean … ?” But of all the mediations by algorithms, the mediation by autocompletion algorithms acts in a particularly powerful way because it doesn’t correct us afterwards. It intervenes before we have completed formulating our thoughts in writing. Before we hit ENTER.
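For readers wondering how such suggestions come about at all: at its simplest, an autocompletion function ranks completions of what you have typed so far by how often other people have searched for them. The sketch below is a deliberately naive toy with invented query counts (Google’s actual system is proprietary and vastly more complex), but it illustrates why suggestions end up mirroring aggregate search behaviour, before we hit ENTER.

```python
from collections import Counter

# Deliberately naive toy: invented query counts, nothing like Google's actual
# (proprietary, far more complex) system. It only illustrates the general idea
# of ranking completions of a prefix by how often past queries were typed.
past_queries = Counter({
    "women should have the right to make their own decisions": 95,
    "women should be in the kitchen": 60,
    "women should vote": 40,
    "weather geneva": 300,
})

def autocomplete(prefix: str, max_suggestions: int = 3) -> list[str]:
    """Return the most frequent past queries starting with the given prefix."""
    matches = [(q, n) for q, n in past_queries.items() if q.startswith(prefix.lower())]
    matches.sort(key=lambda item: item[1], reverse=True)
    return [q for q, _ in matches[:max_suggestions]]

print(autocomplete("women should"))
# The suggestions simply mirror what other people have typed before us.
```

Even in this toy version, the function speaks before we do: it proposes formulations drawn from other people’s past searches while we are still typing.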
Science and research, particle physics and astronomy, talks and music… On Friday, I had the honour and pleasure of spending a very special day at CERN: a guided visit plus TEDxCERN.
For those of you who are not familiar with this acronym: it stands for Conseil Européen pour la Recherche Nucléaire and has been a bit in the news lately for discovering a new particle, most probably the Higgs boson (cf. understandable explanations of the Higgs boson).
Back to my very special day: the morning was dedicated to visiting the impressive CMS experiment facility, and in the afternoon, the TEDx conference took place. We were welcomed in a tent, but the actual event was held in the beautiful Globe of Science and Innovation.
It was not my first TEDx experience, and I enjoyed the scientific emphasis. Below, I will share my personal thoughts and highlights, but I would like to underline that the whole program was of a very high level.
Science and marketing
One of my favourite catchphrases comes from the entertaining Marc Abrahams and goes more or less like this:
If you do research and you know what you are going to find, you’re not doing research – you’re doing marketing.
Unsurprisingly for a science-heavy program, several speakers shared their journey from not knowing to actual findings:
Maya Tolstoy, a marine geophysicist with an impressive track record, spoke about noticing oddly recurrent signals in her data, which then paved the way for the discovery of correlations between tides and seafloor seismicity. Theoretical physicist Gian Giudice showed the impact of the discovery of the Higgs boson on calculations of the stability of the universe. (Bad news, by the way: under Giudice’s current premises, the calculations reveal a highly unstable universe; however, it seems we don’t have to worry, since our sun will blow up before anything happens to our universe anyway.)
And cosmologist Hiranya Peiris, wonderfully starting off the TEDxCERN talks with a whodunnit about the beginning of the universe, reminded us that all research, even when it does not yield a ground-breaking discovery, beats never leaving the point of not knowing:
“Looking and not finding is not the same as not looking.” (H. Peiris)
Science and resources
The focus of TEDxCERN was, accordingly, not only on the outcome of research, but also on science itself, and on the very importance of enabling and undertaking research:
Computer scientist Ian Foster, who is to be credited with the analogy between research and a journey (and, incidentally, grid computing), explained very well how an “ocean liner” such as CERN may be best adapted for certain kinds of research journeys, but not for all of them. And scientists who are not aboard an ocean liner (but a sailboat, for example) need to make headway, too…
“Today, a person can run a company from a coffee shop thanks to cloud computing… what about labs?” (I. Foster)
He presented several cloud platforms empowering small-scale labs and researchers, notably Globus Online, which allows scientists to focus on their data rather than on storing, sharing and maintaining it.
On the other hand, TED veteran Lee Cronin stressed the need for his field, chemistry, not only to advance in sailboats, but to engage and collaborate on an ocean-liner scale in order to discover the origin of life.
Science and tomorrow’s scientists
Coincidentally or not, two of the most personal talks were dedicated to the situation of young academics – although each from a very different viewpoint: Becky Parker is a teacher of physics and astronomy at Simon Langton School, acting along the lines of the “radical” idea that interest in science can be sown and supported by engaging students in actual scientific projects. LUCID proves her right. Her innovative approach and personal enthusiasm have triggered many “I wish I had had a teacher like her” thoughts and tweets.
In a society where Becky Parkers are the exception rather than the rule, insatiable curiosity and personal experience may make up for a lack of intellectual stimulation in school: Brittany Wenger began studying neural networks (by herself!) when she was just 13 years old, learnt programming and now provides Cloud4Cancer, a service to detect breast cancer less invasively than standard methods.
The special guest scheduled right after Brittany Wenger’s talk, will.i.am, also advocated for young scientists: live via webcam, he explained why he is fascinated by science and why he encourages young people, no matter what neighbourhood they grow up in, to learn about science and programming. He underlined the importance of engaging every kid in education and science, regardless of their background.
(Unfortunately, I spotted some of the white, grey-haired men in the audience frowning upon hearing a black musician (read: “non-scientist”) talk about science in his own words – kudos to the TEDxCERN curators for not sharing this elitist mindset.)
Science and collaboration
SESAME: transnational scientific cooperation
Apropos of science and elitism: astronomer Chris Lintott’s talk was a perfect illustration of the benefits scientists can gain from treating laypeople as a complement rather than an opposition. His Zooniverse, which brings together citizen science projects, makes for a great proof of concept of collaborative and/or crowdsourced science.
Collaboration of another kind is at the heart of SESAME, presented by Zehra Sayers and Eliezer Rabinovici: much as CERN was a unique trans-European venture in post-World War II Europe, SESAME is a unique undertaking that aspires to international cooperation across cultural and political divides through first-class science in the Middle East.
Science and subjectivity
By affinity, I guess (I am a sociologist), the talks addressing objectivity and subjectivity in science and research were the ones I personally liked best:
John Searle explained that ignoring consciousness has been science’s biggest fallacy, one that unfortunately helped uphold the false dichotomy of objective science versus subjective consciousness. He argued for the objectivity in subjectivity (and vice versa!) and, incidentally, trashed behaviorism. Which makes me think: for future editions of TEDxCERN, it would be a great addition to give more room to research about science.
Many of the examples mentioned in Londa Schiebinger‘s talk were a perfect illustration of how objectivity and subjectivity co-exist – and thus why science and innovation need to be inclusive of diversity in subjectivity in order to be as objective as possible.
“Gender bias in society creates gender bias in knowledge.” (L. Schiebinger)
(For instance: childless urban planners modelled people’s movements by categorising each trip as “work”, “shopping”, “leisure”, “visits”, etc. This might work for them. However, for people with care obligations who often zig-zag around the city – bringing one child to school and the other to day-care, passing by the dry-cleaner’s, etc., all on their way to work – single, finite categories per trip simply didn’t work. For more examples and resources, cf. Schiebinger’s project Gendered Innovations at Stanford.)
Science and soprano (and other music)
Listening to Maria Ferrante sing about galaxies and C8H10N4O2 was pure delight and fit the overall program very well. So did the new rendition of Reach for the Stars, the first song ever transmitted interplanetarily, performed by the Collège International de Ferney-Voltaire Choir and the International School of Geneva Chorus. Yaron Herman and Bijan Chemirani played together at the very end of TEDxCERN. I remembered Yaron Herman from when he played at TEDxHelvetia at EPFL a few months ago, where he also shared his fascinating story. It was a pleasure listening to him again, especially in harmony with Bijan Chemirani.
Last but not least
I need to mention geneticist George Church‘s talk, but I am not embarrassed to admit that I was not able to follow everything he said. What I understood and recall: DNA bears immense potential; transdisciplinary research is the future.
Big thanks to CERN, the TEDxCERN team and everyone else involved for a well-curated, diverse yet coherent program. Thanks to the speakers for making me think, and laugh.
By the way: another account of the TEDxCERN day can be found on TEDxCERN volunteer Alex Brown’s blog.
Oh, and you might want to have a look at the TED-Ed videos co-produced with CERN. My favorites:
Of course there are many ways sociology can contribute to a better understanding of what is happening online: the field is vast, and so is the number of experts and studies. This blog post has become a series and is – more or less – an English translation of a presentation I recently gave in French, picking up a few of the theoretical frameworks that illustrate the impact of social media on the way we do business… and on our lives in general.