Google’s autocompletion: algorithms, stereotypes and accountability

[Image: “questions” by xkcd]

Women need to be put in their place. Women cannot be trusted. Women shouldn’t have rights. Women should be in the kitchen. …

You might have come across the latest UN Women awareness campaign. Originally in print, it has been spreading online for almost two days. It shows four women, each “silenced” by a screenshot of a particular Google search and its suggested autocompletions.

Researching interaction with Google’s algorithms for my PhD, I cannot help but add my two cents, along with suggestions for further reading in the links …

[Image from the UN Women campaign: “Women should have the right to make their own decisions”]

Guess what people’s most common reaction was?

They headed over to Google to check the “veracity” of the screenshots and to test the suggested autocompletions for “Women should …” and other expressions. I have seen this done all around me, on sociology blogs as well as by people I know.

In terms of an awareness campaign, this is a great success.

And more awareness is a good thing. As the video autofill: a gender study concludes, “The first step to solving a problem is recognizing there is one.” However, people’s reactions have reminded me, once again, how little the autocompletion function had been problematized, in general, before the UN Women campaign. Which, in turn, makes me realize how much of the knowledge related to web search engine research that I have acquired these last months I already take for granted… but I digress.

This awareness campaign has been very successful in making people more aware of ~~the sexism in our world~~ Google’s autocomplete function.

[Image from the UN Women campaign: “Women need to be seen as equal”]

Google’s autocompletion algorithms

At DH2013, the annual Digital Humanities conference, I presented a paper I co-authored with Frederic Kaplan about ongoing research at the DHLab on Google’s autocompletion algorithms. In this paper, we explained why autocompletions are “linguistic prostheses”: they mediate between our thoughts and how we express these thoughts in (written) language. So do related searches, or the suggestion “Did you mean … ?” But of all the mediations by algorithms, the mediation by autocompletion algorithms acts in a particularly powerful way because it doesn’t correct us afterwards. It intervenes before we have finished formulating our thoughts in writing. Before we hit ENTER.

Thus, the appearance of an autocompletion suggestion during the search process might make people decide to search for this suggestion even though they hadn’t intended to. A recent paper by Baker and Potts (2013) consequently questions “the extent to which such algorithms inadvertently help to perpetuate negative stereotypes”:

It is not possible to know how many people have typed in stereotyping questions about various social groups, and we do not know if such people represent the majority in a population. As noted above, we would guess that actual numbers of people asking such questions are relatively low, but those who do, tend to ask the stereotyping ones. However, even if it emerges that many people are interested in such questions and click on the auto-suggestions that appear, is there an over-riding moral imperative to remove these auto-suggestions?
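The dynamic Baker and Potts point to is, at bottom, a feedback loop, and a toy simulation makes it concrete. All queries and numbers below are invented for illustration; the point is only that a suggested query accumulates searches faster than an unsuggested one:

```python
# Toy simulation (all numbers invented) of the suggestion feedback loop:
# the top query gets suggested, the suggestion triggers extra searches,
# and those extra searches keep it on top.

volumes = {"query A": 1000, "query B": 1100}  # B starts with a small lead

ADOPTION = 50   # extra searches per round caused by being suggested
ORGANIC = 10    # baseline searches per round for every query

for round_no in range(5):
    suggested = max(volumes, key=volumes.get)  # the completion users see
    volumes[suggested] += ADOPTION             # some users adopt the suggestion
    for query in volumes:
        volumes[query] += ORGANIC              # organic interest continues
    print(round_no, volumes)

# "query B" pulls further ahead every round, even though underlying
# organic interest in A and B is identical.
```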

Google’s autocompletion has been around for quite some time (almost 9 years, to be exact, although the official roll-out only came in 2008). According to the company, the function suggests what it deems “useful queries” (without defining “useful”, of course) to users “by analyzing a variety of characteristics of your custom search engine”. The volume of searches for a specific search term (from different locations) seems to be the main determinant. But there is no reason to believe that suggestions aren’t, to some degree, personalized (the way search results are). And: autocompletion isn’t entirely automated. Google influences (“censors”, some say) autocompletion globally and locally through hardcoding, be it for commercial, legal or puritanical reasons. (Bing does so, too.)
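To make that description concrete, here is a minimal sketch of a suggestion engine along those lines: rank stored queries by search volume for a given prefix, then filter against a hardcoded blocklist. The query log and blocklist are invented, and this is of course not Google’s actual implementation, which remains a black box:

```python
# Minimal sketch: volume-ranked prefix completion with hardcoded filtering.
# The query log is invented; a real system would also factor in freshness,
# location and (probably) personalization.

query_volumes = {
    "women should vote": 12000,
    "women should have equal rights": 9500,
    "women should be in the kitchen": 15000,  # volume, not veracity, drives rank
}

# Hardcoded interventions of the kind described above ("censoring").
blocklist = {"women should be in the kitchen"}

def autocomplete(prefix, k=4):
    """Return up to k stored queries starting with prefix, highest volume first."""
    candidates = [
        (volume, query)
        for query, volume in query_volumes.items()
        if query.startswith(prefix) and query not in blocklist
    ]
    return [query for volume, query in sorted(candidates, reverse=True)[:k]]

print(autocomplete("women should"))
# -> ['women should vote', 'women should have equal rights']
```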

[Image from the UN Women campaign: “Women cannot accept the way things are”]

There is no “veracity” to be established, because Google is not the objective mirror it claims to be. Google is a company that works however it chooses. Instead of veracity, let’s focus on accountability. Because Google’s very existence, as well as the specific way it works, has an impact on our lives.

Who is in charge when algorithms are in charge?

I am not implying that the negative stereotyped search suggestions about women are Google’s intent – I rather suspect a coordinated bunch of MRAs are to blame for the volume of said search terms – but that doesn’t mean Google is completely innocent. The question of accountability goes beyond a binary choice between intentionality and complete innocence.

Unsurprisingly, Google doesn’t take any responsibility. It puts the blame on its own algorithms … as if the algorithms were beyond the company’s control.

Der Spiegel wrote (about another autocompletion affair):

The company maintains that the search engine only shows what exists. It’s not its fault, argues Google, if someone doesn’t like the computed results. […]
Google increasingly influences how we perceive the world. […] Contrary to what the Google spokesman suggests, the displayed search terms are by no means solely based on objective calculations. And even if that were the case, just because the search engine means no harm, it doesn’t mean that it does no harm.

If we, as a society, do not want negative stereotypes (be they sexist, racist, ableist or otherwise discriminatory) to prevail in Google’s autocompletion, where can we locate accountability? With the people who first asked stereotyping questions? With the people who asked next? Or with the people who accepted Google’s suggestion to search for the stereotyping questions instead of searching for what they originally intended? What about Google itself? …

[Image from the UN Women campaign: “Women shouldn’t suffer from discrimination anymore”]

Of course, algorithms imply automation. And digital literacy helps in understanding the process of automation – I have said this before – but algorithms are more than a technological issue: they involve not only automated data analysis, but also decision-making (cf. “Governing Algorithms: A Provocation Piece” #21. No, actually, you should not only read #21 but the whole, very thought-provoking provocation piece!). Which makes it impossible to ignore the question of whether algorithms can be accountable.

In a recent Atlantic article advocating reverse engineering, N. Diakopoulos asserts:

[…] given the growing power that algorithms wield in society it’s vital to continue to develop, codify, and teach more formalized methods of algorithmic accountability.

Which I think would be a great thing because, at the very least, it would raise awareness. (I don’t agree that “algorithmic accountability” can be assigned a priori, though.) But when algorithms are not accountable, then who is? The people/organization/company creating them? The people/organization/company deploying them? Or the people/organization/company using them? This brings us back to the conclusion that the question of accountability goes beyond a binary choice between intentionality and complete innocence… which makes the whole thing an extremely complex issue.
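What could such reverse engineering look like in practice? A minimal sketch: systematically probe the autocompletion service with prefixes and record what comes back, so outputs can be compared across time, locations or accounts. The endpoint below is an unofficial, undocumented Google Suggest URL that has circulated among developers; it may change or disappear at any time, so treat it as an assumption rather than a stable API:

```python
# Probe an autocompletion service and log its suggestions.
# NOTE: the URL below is unofficial and undocumented (an assumption,
# not a supported API); the response format may change without notice.

import json
import urllib.parse
import urllib.request

def probe_suggestions(prefix):
    """Fetch autocompletions for `prefix`; returns a list of suggestion strings."""
    url = ("https://suggestqueries.google.com/complete/search?client=firefox&q="
           + urllib.parse.quote(prefix))
    with urllib.request.urlopen(url, timeout=10) as response:
        # Observed response shape: [prefix, [suggestion, suggestion, ...]]
        payload = json.loads(response.read().decode("utf-8", errors="replace"))
    return payload[1]

# Repeat the same probes over days, locations or accounts and diff the
# logs to surface patterns in what gets suggested.
for prefix in ["women should", "women need to", "women cannot"]:
    print(prefix, "->", probe_suggestions(prefix))
```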

Who is in charge when algorithms are in charge?

PS:
Oh, and of course algorithms are not simply “bad”. The proof: Google Autocomplete can also produce nice things, e.g. poetry. ;)

21 thoughts on “Google’s autocompletion: algorithms, stereotypes and accountability”

  1. doughill50

    “Who is in charge when algorithms are in charge?”

    Great question. I have a chapter devoted to these issues in my new book, “Not So Fast: Thinking Twice About Technology.” One quote in that chapter I like a lot is from the philosopher Georges Canguilhem: “A model only becomes fertile by its own impoverishment.”

  2. Stéphane V.

    Thank you, Anna, for a very stimulating article. Nevertheless, isn’t the responsibility for ideas always borne by human beings? Advertising, the media, unions, culture, and indeed Google’s autocompletion are all factors of influence, absolutely, and thanks to you we are becoming aware of the importance of the latter. But in the end, isn’t each of us responsible for what we accept to think? It would be interesting to show whether searches autocompleted via Google have a real influence on opinions or not. Perhaps it’s like advertising slogans: people read them but don’t believe them… What do you think?

  3. Pingback: Google autocomplete | digithek blog

  4. peter

    I think one of the biggest problems with the design/use of the internet is the assumption that it’s good to find things that are similar to you; everything is designed to reinforce your past likes/activities, which makes it easier to sell you things but also reinforces differences between different groups of people.

    I also think it’s weird how much people’s attitudes to search/research have changed, and some responsibility has to be placed on users acting like there’s only one search engine/strategy out there. When I was in high school 10 years ago the internet was BAD as a research source, and at the least you had to find material on some kind of trusted/documented site. Now people don’t even do that with their everyday activities, which sounds less significant, but is probably affecting more than a high school research paper. It would be better to jump between searching on a variety of engines, different websites, twitter, different news sites, etc, but nobody really does that simply because of convenience.

  5. Adrian Kuhn (@akuhn)

    The interesting question to me is the public perception of algorithms and how people try to make sense of algorithmic outputs that even the engineers who built them do not understand. Betcha the Google engineers who built autocomplete cannot explain why these sentences show up there. Is it really people with stereotypes typing in these questions (as the UN want us to believe), or rather people worried about stereotypes? Or was it just a line from a song that, when not taken out of context, meant something else? We don’t know! For people without a technical background, what is their mental model of query formulation, and of how the algorithm works?

  6. rebeccambs

    “This awareness campaign has been very successful in making people more aware of [the sexism in our world] Google’s autocomplete function”

    YES! This sums it up nicely! The goals of the initial, intended messages are essentially obscured. And I agree with your criticism of Google not being an “objective mirror” – I think most people would as well, with little convincing (thus I’m not sure that the majority of readers/commentators who have engaged with this campaign actually implied that Google WAS an objective reflection of reality… though maybe I am naive in assuming that!).

    nice post

  7. Phil Shankland

    “just because the search engine means no harm, it doesn’t mean that it does no harm.”

    I don’t mean any harm to Indian workers in a collapsed sweatshop, yet if I adopt the same consumer buying habits as others, that may well be the result. This is an ethical question. It is disingenuous (of course) for Google to absolve themselves of responsibility on the grounds that they are mirroring search behaviour. The reason they do so (of course) is because they are making money and it is not in their interest to self-regulate. I would be interested to know if Google ‘fiddled’ autocompletes that were critical of the company. I note that if I enter “Google is …”, the second autocompletion is ‘evil’.

    All power corrupts. Absolute power corrupts absolutely. Lord Acton was way ahead of the algorithm.

  8. Pingback: Proxem » La lettre du 28 octobre : Google n’est pas le « simple reflet » du web

  9. Pingback: Four short links: 22 November 2013 - O'Reilly Radar

  10. Pingback: Pasta&Vinegar » Blog Archive » #curiousalgorithms

  11. Pingback: Kim Kardashian’s Marriage by Sam Riviere | Sabotage

  12. Pingback: Algo-ritmi | luciano petullà

  13. Pingback: Algo-rithms. The cultural mediation of algorithms

  14. Pingback: Enjeux technologiques et sociaux: 5 idées reçues à propos du numérique | Sociostrategy

  15. Pingback: Technology, innovation and society: five myths debunked | Sociostrategy

  16. Pingback: Researching advertising algorithms | Sociostrategy

  17. Pingback: Google Autocomplete revisited | Sociostrategy

  18. Pingback: “If a computer is right 99% of the time, I wouldn’t want to be the 1% case” | Sociostrategy
