«Did Google Manipulate Search for [presidential candidate]?» was the title of a video that showed up in my Facebook feed. In it, the host argued that when you enter a particular presidential candidate’s name into Google’s query bar, very specific autocomplete suggestions do not show up although – according to the host – they should.
I will address the problems with this claim later, but let’s start by noting that the argument was quickly picked up (and sometimes transformed) by blogs and news outlets alike, inspiring titles such as «Google searches for [candidate] yield favorable autocomplete results, report shows», «Did [candidate]’s campaign boost her image with a Google bomb?», «Google is manipulating search results in favor of [candidate]», and «Google Accused of Rigging Search Results to Favor [candidate]». (Perhaps the most accurate title of the first wave of reporting comes from the Washington Times: «Google accused of manipulating searches, burying negative stories about [candidate]».)
I could not help but notice the shift of focus from Google Autocomplete to Google Search results in some of the reporting, and there is of course a link between the two. But it is important to keep in mind that manipulating autocomplete suggestions is not the same as manipulating search results, and careless, sweeping statements are no help if we want to understand what is going on and what is at stake – which is what I first set out to do almost four years ago.
Indeed, Google Autocomplete is not a new topic. For me, it started in 2012, when my transition from entrepreneurship/consulting into academia was smoothed by a temporary appointment at the extremely dynamic, innovative DHLab. My supervising professor was a very rigorous mentor while giving me great freedom to explore the topics I cared about. Between his expertise in artificial intelligence and digital humanities and my background in sociology, political economy and information management, we identified a shared interest in researching Google Autocomplete algorithms. I presented the results of our preliminary study in Lincoln, NE at DH2013, the annual Digital Humanities conference. We argued that autocompletions can be considered a “linguistic prosthesis” because they mediate between our thoughts and how we express these thoughts in written language. Furthermore, we underlined how mediation by autocompletion algorithms acts in a particularly powerful way because it intervenes before we have finished formulating our thoughts in writing and may therefore have the potential to influence actual search queries. A great paper by Baker & Potts, published in 2013, comes to the same conclusion and questions “the extent to which such algorithms inadvertently help to perpetuate negative stereotypes”.
Back to the video and its claim that, upon entering a particular presidential candidate’s name into Google’s query bar, very specific autocomplete suggestions do not show up although they should. But why should they show up? The explanation offered in the video rests on two arguments: graphs of comparative search volume from the Google Trends tool, and a comparison with autocomplete suggestions from the web search engines Yahoo and Bing.
However, Google Trends seems to have the same flaw as statistics: it can be very informative, but if you torture it long enough, it will confess to anything. Rhea Drysdale has published an informative piece that shows very clearly the manipulative nature of (mis-)using Google Trends as «anecdotal evidence» for «two random queries out of literally millions of variations», the way the authors of the video have. I cannot but encourage you to read Drysdale’s article. (One sentence resonates particularly with me because of what I am currently working on: «Let’s see if mainstream media bothers to do their homework or simply picks up this completely bogus story spreading it further.» Previous experience suggests the latter.) She uses other queries and Google Trends to illustrate how a manipulation of search for another candidate could be “proved” just as easily, and concludes that there is no manipulation with a political agenda, just Google’s algorithms at work.
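(A side note for the technically inclined: producing such “evidence” is almost effortless, which is part of the problem. Here is a minimal sketch using the unofficial pytrends library – an assumption on my part, since it wraps an undocumented Google interface and may break or change at any time. Swap in a different pair of queries and you can “prove” the opposite story just as easily.)

```python
# Minimal sketch of how cherry-picked Google Trends comparisons are made.
# Assumes the unofficial `pytrends` library (pip install pytrends), which
# wraps an undocumented Google interface and may break or change at any time.
from pytrends.request import TrendReq

def compare_queries(query_a: str, query_b: str, timeframe: str = "today 3-m"):
    """Fetch relative search interest for two hand-picked queries."""
    pytrends = TrendReq(hl="en-US", tz=0)
    pytrends.build_payload([query_a, query_b], timeframe=timeframe)
    df = pytrends.interest_over_time()
    # Values are scaled 0-100 *within this payload only*, so the "result"
    # depends entirely on which two queries you picked out of millions.
    return df[[query_a, query_b]].mean()

# Two queries out of literally millions of variations:
print(compare_queries("example query a", "example query b"))
```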
Another article by Clayburn Griffin comes to the same conclusion. He reminds us that «Google Autocomplete is more complicated than you think. It’s not as simple as search volume, though that is an important factor.» But Griffin is convinced: «What I’ve seen, and what’s been reported in the video claiming Google is biased, doesn’t look like manipulation to me. It looks like Google Autocomplete working as intended.»
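To make Griffin’s point tangible, here is a deliberately hypothetical sketch: every signal name and weight below is my own invention, not Google’s, and it only illustrates that a suggestion ranking can combine search volume with other factors and with hardcoded vetoes.

```python
# Purely hypothetical sketch of multi-signal suggestion ranking.
# None of these signals or weights come from Google; they only illustrate
# that ranking is "not as simple as search volume".
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    search_volume: float    # normalized 0..1
    freshness: float        # recent spike in interest, 0..1
    locale_affinity: float  # match with user's region/language, 0..1
    policy_ok: bool         # passes content rules (hardcoded, not learned)

def score(c: Candidate) -> float:
    # Invented weights: volume matters, but is only one factor.
    return 0.5 * c.search_volume + 0.3 * c.freshness + 0.2 * c.locale_affinity

def suggest(prefix: str, candidates: list[Candidate], k: int = 4) -> list[str]:
    eligible = [c for c in candidates
                if c.text.startswith(prefix) and c.policy_ok]  # hardcoded veto
    return [c.text for c in sorted(eligible, key=score, reverse=True)[:k]]

pool = [
    Candidate("why is the sky blue", 0.9, 0.2, 0.8, True),
    Candidate("why is my wifi slow", 0.6, 0.9, 0.9, True),
    Candidate("why is something blocked", 0.95, 0.1, 0.5, False),  # vetoed
]
print(suggest("why is", pool))
```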
This is where it gets tricky, because I am as glad as the next person to learn more about how Google Autocomplete is intended to work. Then again, for the point I am trying to make there is no need to go into the anecdotal – no need to know which query for which demographic on which particular search engine does or does not prompt a particular autocomplete suggestion. There are also methodological issues: not only are the algorithmic suggestions based on profiling and personalization, but the algorithms themselves are ever changing. More often than not, the focus on single trees has contributed to rendering the forest invisible. (Still, I must underline that in some great research the trees actually help illustrate the forest, or even the macrocosm – and in this regard: how very fitting that just before starting to write this article I saw raving tweets about an ongoing presentation by Safiya Noble. Please check out her excellent work.)
… no manipulation with a political agenda, just Google’s algorithms at work… But the absence of a political agenda does not mean the absence of politics. “Google Autocomplete working as intended” is necessarily political, for the very simple reason that algorithmic systems are not neutral.
And although there may indeed be no particular candidate or cause being favored, the very fact that we presume Google Autocomplete would be able to favor one shows the position of power it holds.
To be very clear, that power is not necessarily one of manipulating people into holding one opinion rather than another; it is rather a power of agenda setting. In this regard, it is similar to traditional media, which cannot necessarily dictate what people think but can certainly influence what people think about. The information we are given in the form of Google results may affect our opinions on certain topics, but it is Google Autocomplete that may actually influence which topics we seek information about in the first place: the emergence of an autocomplete suggestion during the search process might make people decide to search for that suggestion although they did not originally intend to.
It is impossible to address power and agenda setting in a digital context without drawing parallels to the controversy around Facebook Trends. Until recently, little was known about the logics that make a topic “trending” on Facebook. And although a few researchers have been addressing the power of “trending” topics and the lack of knowledge about it, it has not necessarily been considered an issue by journalists, politicians or the general public. But when some of these logics were suddenly revealed, discussions about their adequacy, neutrality and transparency were sparked (and even a Senate committee got involved). Tarleton Gillespie addresses important issues with regard to the Facebook Trends controversy, many of which are just as relevant for Google Autocomplete.
It is not surprising that the dynamics around Google Autocomplete have followed a rather typical pattern: almost no interest whatsoever in how it works as long as nothing is known about it; then, by learning something about Autocomplete, people learn that there actually is something to know; then they want to know more; finally, they demand accountability. That “something to know” may have been ignited by the video claiming political manipulation – or rather: re-ignited. As early as 2013, an ad campaign by UN Women made people more aware of the sexism in our world by means of Google’s autocomplete function.
Of course, knowing more about how Google Autocomplete works is a good starting point, and it prevents us from confusing the (potential) manipulation of search queries with the manipulation of search results. As I wrote in 2013, it is interesting to learn that «autocompletion isn’t entirely automated. Google influences (“censors”, some say) autocompletion globally and locally through hardcoding, be it for commercial, legal or puritan reasons. (Bing does so, too.)» But ultimately, I am not convinced that yet another trial-and-error reverse engineering attempt revealing whether a particular expression is or is not suggested in a certain context will contribute to a greater overall understanding.
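To illustrate what that layering of hardcoding on top of automation could look like – and this is my own toy construction, not Google’s implementation – one can picture an automated ranking step followed by global and local hand-maintained rules:

```python
# Toy illustration (my construction, not Google's code) of hardcoded rules
# layered on top of automated suggestions, globally and per locale,
# as the 2013 quote above describes.
GLOBAL_BLOCKLIST = {"some blocked phrase"}              # e.g. legal reasons
LOCAL_BLOCKLISTS = {"de": {"another blocked phrase"}}   # e.g. local law

def apply_hardcoding(suggestions: list[str], locale: str) -> list[str]:
    blocked = GLOBAL_BLOCKLIST | LOCAL_BLOCKLISTS.get(locale, set())
    return [s for s in suggestions if s not in blocked]
```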
By the way, this is the main reason why a comparison of results/autocomplete suggestions/… between different web search engines has its limits: it will only ever offer comparative insights (which, admittedly, might reveal some of what could be otherwise) and mainly keeps feeding the erroneous idea that there is a single, self-explanatory standard for how technology should work.
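For what it is worth, such cross-engine probing is technically trivial, which is partly why it proliferates. A minimal sketch, assuming the long-standing but unofficial and undocumented suggest endpoints of Google and Bing (both may change or disappear without notice) – and note what it cannot capture: no profile, no personalization, no context.

```python
# Minimal cross-engine autocomplete probe. Both endpoints are unofficial
# and undocumented (an assumption based on long-standing behavior). The
# responses carry no personalization, which is exactly the limit of such
# comparisons discussed above.
import json
import urllib.parse
import urllib.request

ENDPOINTS = {
    "google": "https://suggestqueries.google.com/complete/search?client=firefox&q={}",
    "bing": "https://www.bing.com/osjson.aspx?query={}",
}

def suggestions(engine: str, query: str) -> list[str]:
    url = ENDPOINTS[engine].format(urllib.parse.quote(query))
    with urllib.request.urlopen(url) as resp:
        charset = resp.headers.get_content_charset() or "utf-8"
        payload = json.loads(resp.read().decode(charset))
    return payload[1]  # OpenSearch format: [query, [suggestions, ...]]

for engine in ENDPOINTS:
    print(engine, suggestions(engine, "why is"))
```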
As long as we hold on to the idea that a fair, neutral search engine (then again: fair and neutral for whom?) is possible and simply defined by the absence of manipulation, we have understood neither algorithmic systems, nor society, nor their intersection.