Wednesday, November 6, 2013

Is Google autocomplete evil?


BBC Future, 6 November 2013


(Image caption: The wit and wisdom of Google. A UN campaign recently used Google autocomplete to argue that sexism is prevalent...but what does the tool really tell us about how we think? Credit: Memac Ogilvy & Mather Dubai)
The way Google finishes our sentences during internet searches is corrupting our thoughts, says Tom Chatfield.
“Women shouldn’t have rights.” “Women shouldn’t vote.” “Women shouldn’t work.” How prevalent are these beliefs? According to a recent United Nations campaign, such sexism is dispiritingly common, and it’s why they published these sentiments on a series of posters. The source? These statements were the top suggestions offered by Google’s “instant” search tool when the words “Women shouldn’t…” were typed into its search box.
Google Instant is an “autocomplete” service – which, as the name suggests, automatically suggests letters and words to complete a query, based on the company’s knowledge of the billions of searches performed across the world each day. If I enter the words “women should,” the number one suggestion on my own screen is “women shoulder bags,” followed by the more depressing “women should be seen and not heard.” If I type “men should”, the enigmatic suggestion “men should weep” pops up.
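The core idea is simple enough to sketch. Below is a minimal, purely illustrative toy in Python, not Google's actual system: it treats completions as nothing more than past queries sharing a prefix, ranked by how often they were searched. The `suggest` function and the sample query log are invented for this example.

```python
from collections import Counter

def suggest(prefix, query_log, limit=3):
    """Toy autocomplete: return the most frequent past queries starting with prefix."""
    counts = Counter(q for q in query_log if q.startswith(prefix))
    return [query for query, _ in counts.most_common(limit)]

# Whatever people searched most often surfaces first, regardless of what it says.
log = [
    "women shoulder bags",
    "women shoulder bags",
    "women should be seen and not heard",
    "women should vote",
]
print(suggest("women should", log))
# ['women shoulder bags', 'women should be seen and not heard', 'women should vote']
```

Even this crude version shows why the suggestions feel like a mirror: they are simply an echo of what was typed before.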
The argument behind the UN campaign is that this algorithm offers a glimpse into our collective psyche – and a disturbing one at that. Is this really true? Not in the sense that the campaign implies. Autocomplete is biased and deficient in many ways, and there are dangers ahead if we forget that. In fact, there is a good case that you should switch it off entirely.
Like many of the world’s most successful technologies, the mark of autocomplete’s success is how little we notice it. The better it’s working, the more seamlessly its anticipations fit in with our expectations – to the point where it’s most noticeable when something doesn’t have this feature, or Google suddenly stops anticipating our needs. The irony is that the more effort that’s expended making the results appear so seamless, the more unvarnished and truthful the results feel to users. Knowing what “everyone” thinks about any particular issue or question simply means starting to type, and watching the answer write itself ahead of our tapping fingers.

Yet, like any other search algorithm, autocomplete blends a secret sauce of data points beneath its effortless interface. Your language, location and timing are all major factors in results, as are measures of impact and engagement – not to mention your own browsing history and the “freshness” of any topic. In other words, what autocomplete feeds you is not the full picture, but what Google anticipates you want. It’s not about mere truth; it’s about “relevance”.
This is before you get on to censorship. Understandably, Google suppresses terms likely to encourage illegality or materials unsuitable for all users, together with numerous formulations relating to areas like racial and religious hatred. The company’s list of “potentially inappropriate search queries” is constantly updated.
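To make the "relevance, not truth" point concrete, here is another hedged sketch, again not Google's implementation: every candidate completion gets a score blended from several weighted signals, and anything on a blocklist of inappropriate phrasings is dropped before ranking. All of the signal names, weights, and sample entries are assumptions invented for illustration.

```python
# Invented blocklist and signal weights, for illustration only.
BLOCKLIST = {"buy counterfeit tickets"}
WEIGHTS = {"popularity": 0.5, "freshness": 0.2, "personal": 0.2, "locale": 0.1}

def rank(candidates, limit=3):
    """candidates: dicts with a 'text' key plus one 0..1 value per signal."""
    # Suppress anything on the blocklist before scoring.
    allowed = [c for c in candidates if c["text"] not in BLOCKLIST]
    # Blend the signals into a single relevance score and sort by it.
    scored = sorted(
        allowed,
        key=lambda c: sum(WEIGHTS[s] * c.get(s, 0.0) for s in WEIGHTS),
        reverse=True,
    )
    return [c["text"] for c in scored[:limit]]

candidates = [
    {"text": "buy cheap tickets",       "popularity": 0.8, "freshness": 0.3, "personal": 0.2, "locale": 0.6},
    {"text": "buy concert tickets",     "popularity": 0.6, "freshness": 0.7, "personal": 0.9, "locale": 0.6},
    {"text": "buy counterfeit tickets", "popularity": 0.9, "freshness": 0.9, "personal": 0.0, "locale": 0.6},
]
print(rank(candidates))
# ['buy concert tickets', 'buy cheap tickets']
```

Note that the most globally popular candidate never appears at all, and the personalised one outranks the cheaper, more common query: two different users typing the same letters can be shown two different pictures of "what everyone thinks".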
False premise
None of this should be news to savvy web users. Yet many reactions to the UN Women campaign suggest, to me, a reliance on algorithmic expertise that borders on blind faith. “The ads are shocking,” explained one of the copywriters behind them, “because they show just how far we still have to go to achieve gender equality.” The implication is that these results are alarming precisely because they are impartial: unambiguous evidence of prejudice on a global scale. Yet, while the aim of the campaign is sound, the evidence from Google is far from straightforward.
The greatest danger, in fact, is the degree to which an instantaneous answer-generator has the power not only to reflect but also to remould what the world believes – and to do so beneath the level of conscious debate. Autocomplete is coming to be seen as a form of prophecy, complete with a self-fulfilling invitation to click and agree. Yet by letting an algorithm finish our thoughts, we contribute to a feedback loop that potentially reinforces untruths and misconceptions for future searchers.
Consider the case of a Japanese man who, earlier this year, typed his name into Google and discovered autocomplete associating him with criminal acts. He won a court case compelling the company to modify the results. The Japanese case echoed a previous instance in Australia where, effectively, the autocomplete algorithm was judged to be guilty of libel after it suggested the word “bankrupt” be appended to a doctor’s name. And there are plenty of other examples to pick from.
So far as Google engineers are concerned, these are mere blips in the data. What they are offering is ever-improving efficiency: a collaboration between humans and machines that saves time, eliminates errors and frustrations, and enriches our lives with its constant trickle of data. All of which is true – and none the less disturbing for all that.
As the company’s help page puts it, “even when you don’t know exactly what you’re looking for, predictions help guide your search.” Google has built a system that claims to know not only my desires, but humanity itself, better than I could ever manage – and it gently rams the fact down my throat every time I start to type.
Did you know you can turn autocomplete off just by changing one setting? I’d recommend you give it a try, if only to perform a simple test: does having a computer whispering in your ear change the way you think about the world? Or, of course, you can ask Google itself. For me, typing “is Google autocomplete...” offered the completed phrase “is Google autocomplete a joke?”. Unfortunately, the answer is anything but.
