«Did Google Manipulate Search for [presidential candidate]?» was the title of a video that showed up in my Facebook feed. In it, the video host argued that upon entering a particular presidential candidate’s name into Google’s query bar, very specific autocomplete suggestions do not show up although – according to the host – they should.
I will address the problems with this claim at a later point, but let’s start by noting that the argument was quickly picked up (and sometimes transformed) by blogs and news outlets alike, inspiring titles such as «Google searches for [candidate] yield favorable autocomplete results, report shows», «Did [candidate]’s campaign boost her image with a Google bomb?», «Google is manipulating search results in favor of [candidate]», and «Google Accused of Rigging Search Results to Favor [candidate]». (Perhaps the most accurate title of the first wave of reporting is by the Washington Times: «Google accused of manipulating searches, burying negative stories about [candidate]».)
I could not help but notice the shift of focus from Google Autocomplete to Google Search results in some of the reporting, and there is of course a link between the two. But it is important to keep in mind that manipulating autocomplete suggestions is not the same as manipulating search results, and careless sweeping statements are no help if we want to understand what is going on, and what is at stake – which is what I first set out to do almost four years ago.
Indeed, Google Autocomplete is not a new topic. For me, it started in 2012, when my transition from entrepreneurship/consulting into academia was smoothed by a temporary appointment at the extremely dynamic, innovative DHLab. My supervising professor was a very rigorous mentor, all while giving me great freedom to explore the topics I cared about. Between his expertise in artificial intelligence and digital humanities and my background in sociology, political economy and information management, we identified a shared interest in researching Google Autocomplete algorithms. I presented the results of our preliminary study in Lincoln NE at DH2013, the annual Digital Humanities conference. We argued that autocompletions can be considered “linguistic prosthesis” because they mediate between our thoughts and how we express these thoughts in written language. Furthermore, we underlined how mediation by autocompletion algorithms acts in a particularly powerful way because it intervenes before we have finished formulating our thoughts in writing and may therefore have the potential to influence the actual search queries we submit. A great paper by Baker & Potts, published in 2013, came to the same conclusion, questioning “the extent to which such algorithms inadvertently help to perpetuate negative stereotypes“.
Back to the video and its claim that, upon entering a particular presidential candidate’s name into Google’s query bar, very specific autocomplete suggestions do not show up although they should. But why should they show up? The explanation