Around the world, a number of lawsuits have been brought against Google over the content displayed by its autocomplete function. Courts have ordered Google to censor autocomplete suggestions found to be defamatory or insulting.
Individuals and businesses have been launching legal actions against Google because they haven't liked the text that appears next to their name or business when Google's autocomplete feature predicts what a user is trying to find.
Google's autocomplete feature was integrated into its search engine in 2009, and automatically generates search suggestions based on what a user is typing. Put simply, it tries to finish the thought for you as you type.
The autocomplete results are a reflection of the search activity of all web users and the content of web pages indexed by Google, with the predictions determined by Google's algorithm, which factors in, among other things, the number of search queries entered by other users.
The way autocomplete works is that it takes the search terms which other users have entered into the system and figures out which searches are the most popular. In effect, it could be argued that it acts as a speed multiplier, in that it increases the rate of dissemination of information. It is purely an efficiency mechanism, and to ban such a mechanism merely because some unintended effect would be made more efficient by its use seems misguided.
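As a rough illustration of that mechanism, a frequency-based autocomplete can be sketched in a few lines of Python. The query log and ranking rule here are invented for illustration only; Google's actual algorithm weighs many more signals.

```python
from collections import Counter

# Hypothetical query log; contents are illustrative only.
query_log = [
    "john is good", "john is good", "john is good",
    "john is bad", "john smith", "weather today",
]

freq = Counter(query_log)

def autocomplete(prefix, k=3):
    """Return up to k past queries starting with prefix, most popular first."""
    matches = [(query, count) for query, count in freq.items()
               if query.startswith(prefix)]
    matches.sort(key=lambda m: -m[1])
    return [query for query, _ in matches[:k]]

print(autocomplete("john is"))  # most frequent completions first
```

The point of the sketch is that the system merely counts and ranks what users have already typed; it makes no judgement about whether a popular completion is true or fair.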
Google's algorithm is constantly being refined as the company's engineers try to make its results more relevant and responsive to user queries while filtering out as much spam as possible. Google uses semantic search, the science of meaning in language, to produce highly relevant search results rather than relying solely on ranking algorithms to predict relevancy.
The introduction of semantic search, which factors in points of reference like context, location and intent of search, word variations, concept matching, synonyms and natural language queries, is all oriented at delivering users relevant content, saving them from having to sift laboriously through a list of very loosely related keyword search results. Semantic search tries to understand user intent to generate more meaningful results. If courts order Google to disable search results and/or tamper with its algorithm, the integrity of the algorithm and the relevancy of search could deteriorate.
In contrast to basic search, which is built on keywords, semantic search assists internet users in researching subjects they may have no knowledge of by generating a list of documents which are more relevant and targeted, enabling them to gain knowledge about the topic they are researching. Google thereby provides users with a base for further, deeper search of topics in areas which researchers may not previously have been aware of.
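One small ingredient of such a system, synonym expansion, can be sketched as follows. The synonym table is invented for illustration; real semantic search relies on far richer models of context and intent.

```python
# Toy synonym table; entries are invented for illustration.
SYNONYMS = {
    "good": {"great", "excellent"},
    "fraud": {"scam", "swindle"},
}

def expand_query(terms):
    """Expand each query term with its known synonyms."""
    expanded = set(terms)
    for term in terms:
        expanded |= SYNONYMS.get(term, set())
    return expanded

print(sorted(expand_query(["john", "good"])))
```

Even this toy version shows why tampering with the machinery is delicate: the same expansion that helps a researcher find relevant documents will just as readily surface unflattering ones.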
The autocomplete software automatically appends words next to your name, provided, of course, that enough users have searched for the relevant string.
When individuals and organisations sue Google to control the use of their name as it appears in autocomplete results, Google is increasingly being exposed to lawsuits for what are, in reality, search queries previously conducted by its users.
If one of your names happens to be a word found in the dictionary, you are bound to generate a large volume of search results. Most internet users understand that Google's comprehension (for want of a better word) of what a user types in isn't necessarily the same as the understanding a human reader would bring.
Google doesn’t make value judgements about indexed material. The search engine just retrieves material relevant to what it predicts a searcher is looking for, indifferent to whether something has positive or negative connotations. This isn’t surprising given that the search results are generated by an algorithm, rather than being manually determined by a human being.
If your name were "John" and you typed in the word "John" followed by the words "is good", Google doesn't restrict itself to merely locating content which reflects the fact that John is good. It will return swarms of content of all kinds, all the more because both words are extremely common. The content will range from material about the concept of 'good' to other people whose names include either word.
In fact it will locate all sorts of things that are good – besides you!
Roughly speaking, the highest priority will go to pages where "John" appears in close proximity to the word "good". Therefore, if there is a large amount of indexed material saying "Jake is good, John is bad", a search for "John is good" will return the good and the bad, so to speak, because Google's algorithm sees all the words as relevant, and results appear based on a multitude of factors such as the proximity of the words, their prevalence and their order of sequence.
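The proximity factor described above can be sketched with a toy scorer (the scoring rule is invented for illustration). Note that the sentence saying John is bad actually scores higher here, because "john" and "good" sit closer together in it, which is precisely why a query returns the good and the bad alike.

```python
def proximity_score(doc, term_a, term_b):
    """Score a document by how close two terms appear (closer = higher)."""
    words = doc.lower().split()
    pos_a = [i for i, w in enumerate(words) if w == term_a]
    pos_b = [i for i, w in enumerate(words) if w == term_b]
    if not pos_a or not pos_b:
        return 0.0                      # a term is missing: no match
    gap = min(abs(i - j) for i in pos_a for j in pos_b)
    return 1.0 / max(gap, 1)            # smallest word gap wins

docs = ["Jake is good John is bad", "John is good at chess"]
for doc in docs:
    print(doc, "->", proximity_score(doc, "john", "good"))
```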
Google Autocomplete saves time and the frustration of typing, in addition to presenting users with options they may not have thought of or helping them find content to assess further when narrowing their search for information on a given subject. It is up to the user to assess and evaluate the information they locate.
There is a constant battle between black hat SEO experts, Google bombers and Google engineers: Google tries to keep the results as relevant as possible, while those in the other camp try to game or manipulate the search engine to steer the unwary to sites they are trying to lure them to.
If googling your name causes some not-so-complimentary text to emerge adjacent to your name, it doesn't necessarily indicate that a lot of users are saying not-so-complimentary things about you on the internet. As the example above indicates, it could equally mean the exact opposite.
To alter the above scenario of "John is good": if there were a famous fraud investigator named John Good who had recently completed a highly publicised investigation of a prominent member of a scientology organisation, it is likely that his name might start to return search results associating it with words such as "fraud" and "scientology", depending on how popular his investigation was at any particular time and how many internet searchers were trying to locate it. Internet searches for his work would therefore tend to reinforce any existing associations.
It is not uncommon for internet users to type in a product, service or company they are contemplating dealing with followed by the word "scam" to find content about potential problems with that particular service or company. Product and company reviewers and internet authors are all too aware of this and very often take it into consideration, deliberately deploying tags next to trademark names to manipulate search results so that their competitors' products or services appear in an adverse light. This happens when black hat practices twist predictive searching through semantics, deliberately generating distorted results to cause harm or damage the reputations of brands and individuals.
Most internet users are savvy enough to sort through misleading information and make assessments about the credibility of various sources and materials they find. Internet users can usually differentiate between useful and not so useful or legitimate relevant content.
The critical question is whether user access to some content should be censored because a small percentage of the user population may be misled by results or auto-search suggestions.
Internet users searching for information about a product, person or service of any significant prominence would find it utterly implausible if they located only positive material; the absence of any negative commentary or content would be more likely to indicate to an experienced user that somebody has been using the various options available to them, be they legal or economic, to censor, suppress or relegate negative content about themselves.
Reputation management consultants are frequently hired by companies and powerful individuals to 'pretty up' their internet presence. It would be very unusual for a person, product or service to have an unblemished internet profile.
The question is under what circumstances, if any, the courts should hold search engines like Google liable for merely returning results which associate words with names in ways that could be perceived as defamatory or offensive to the person or company with which the words are associated.
Individuals and companies engaged in trying to control autocomplete results are clearly attempting to control the way their name is used, in effect seeking to become the sole licensor of its use in computer-mediated communications, licensing only material which casts them in a positive light.
These cases, and similar ones in Italy, Japan and France involving people and/or organisations suing Google because they are unhappy with what the search engine associates with their name, have sparked debate about the appropriate legal limits on the publication of search results, freedom of speech and the liability of online intermediaries for the publication of defamatory content.
Many people would reasonably perceive any kind of State interference with search results as a form of censorship, making it impractical for people to exercise their right to free speech on the internet and introducing an inherent positive bias into society's general commentary.
Different countries' courts have demonstrated different approaches to the dilemma of defamatory autocomplete suggestions, attributable to differing laws and values. These differences lie at the heart of the debate between freedom of speech on the one hand and the protection of individual privacy and reputational interests on the other.
In a recent decision in Germany, a court ordered Google to remove search terms produced by its autocomplete function which were defamatory and constituted an invasion of privacy. The decision isn't necessarily a precedent for other countries and should be seen in the context of the very strong protections for personal privacy and identity which exist in Germany, where reputation is considered part of a person's privacy interests.
The German Federal Supreme Court (Bundesgerichtshof, or BGH), in Case VI ZR 269/12 of 14 May 2013, ruled that Google was liable for defamatory text from the moment the search engine received notice of such defamatory content. The Court ordered Google to block defamatory words relating to a person and a company from appearing in its search engine's autocomplete function, requiring Google to manually remove the autocomplete results.
The important legal finding was that the Court held that autocomplete constitutes Google's own content, and that Google therefore cannot be regarded as a mere conduit or as benefiting from the caching or hosting safe harbours. The reason Google couldn't claim safe harbour protection was the Court's finding that it analyses data from its users and then presents that data back to them; the act of processing the data by applying its algorithms transformed it into Google's own content. The Court held that Google wasn't serving the functions of a host because autocomplete doesn't perform a merely technical, automatic and passive task, an interpretation which some German legal commentators believe is an erroneous reading of the eCommerce Directive.
The Court held that Google is not required to vet all autocomplete text in advance and legally evaluate it, a task which would clearly be impractical. Therefore, based on the German Court's ruling, the company only has to take action once it receives notice of defamatory autocomplete content from disaffected users.
The effect of the ruling is that individuals and organisations will be able to request deletion of autocomplete content using a notice-and-takedown procedure under German law. Google doesn't have to remove its autocomplete software altogether; rather, it comes under an obligation to evaluate potential defamation claims brought to its attention.
In the German case, the action was brought by a German company selling nutritional supplements and cosmetics online, together with its founder and CEO/Managing Director, known as "R.S.". The plaintiffs sued Google Inc. for infringement of personality rights after R.S. discovered in 2010 that a search of his name on www.google.de was auto-completed by Google with the suggested text "Scientology" and "fraud". The plaintiffs claimed that the results generated implied he was linked to fraud and scientology when he had no association with either. The publication of predictions generated by the autocomplete function constitutes a violation of personality rights if they imply a statement that is untrue.
The plaintiffs had previously requested Google to cease and desist from providing such suggestions for the name search of the plaintiff. The plaintiffs contended there was an infringement of personality rights, as there was no connection between the plaintiffs and scientology, nor any reason to link the plaintiffs to fraudulent behaviour. Furthermore, the plaintiffs contended that none of the search results contained any link between the plaintiff and either scientology or fraud, a fact which Google disputed.
The case established that a search engine may use autocomplete software without being obligated to check every suggested text option in advance. However, once the search engine operator receives notice of infringing suggested text options, the operator must then take all necessary and reasonable measures to prevent future infringements.
In upholding the claims against Google the court found that the suggested text options “scientology” and “fraud” were named in connection with the plaintiffs and infringed their personality rights. Where the suggested text options were not true they were deemed to amount to an unjustified act against the fundamental rights of the plaintiff.
The German Court's ruling doesn't therefore automatically make Google liable for every suggested text option made by the autocomplete function, as it clearly isn't reasonable or practicable for search engines to undertake even the most rudimentary legal analysis of predictive search. However, it did establish that the moment Google receives notice of suggested text options infringing a user's personality rights under German law, it comes under a positive duty to prevent such infringement recurring.
The plaintiffs' right to request Google cease and desist was based on an infringement of Sections 823 and 1004 of the German Civil Code and Articles 1 and 2 of the Basic Law for the Federal Republic of Germany: in summary, a mixture of civil law tort and the fundamental rights of human dignity and individual personal freedom.
The ruling is likely to be of persuasive value in determining the outcome of another highly publicised case pending against Google in Germany, involving Bettina Wulff, the wife of former German president Christian Wulff. She is suing Google because the search engine appended words to her name referring to red-light districts and escort services; her name is still being autocompleted with the German word for "red light". Specifically, she is suing over the autocomplete phrases "Bettina Wulff prostitute" and "Bettina Wulff escort", contending that the autocomplete function perpetuates false rumours that she was once a prostitute, tarnishing her reputation.
Rumours that Wulff had such a past have been in circulation since 2006 and, according to the German press, are believed to have been part of a misinformation campaign involving the spreading of malicious rumours in order to damage her husband's political career. So far 34 cease-and-desist orders have been issued over a period of months, directed at German and foreign bloggers and members of the media, some resulting in fines being imposed on bloggers for what they have written about her.
One of the problems with drawing attention to autocomplete results, and with internet defamation cases in general, is that it can attract more attention to the very material complained of by the person or business affected by malicious gossip. For example, many more curious searchers are now typing in the words "Bettina Wulff escort" to search for information, hence the initial rumour becomes cemented as the number of internet search queries for that string of words increases.
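That reinforcement effect can be simulated under the same frequency-ranking assumption sketched earlier. The brand name, phrases and counts below are entirely hypothetical: every curious search adds to the query log, so the damaging phrase can eventually overtake the innocuous one as the top suggestion.

```python
from collections import Counter

# Hypothetical query log for an invented brand; counts are illustrative.
log = Counter({"acme widgets reviews": 10, "acme widgets scam": 2})

def top_suggestion(prefix):
    """Return the most-searched query beginning with prefix, if any."""
    candidates = {q: n for q, n in log.items() if q.startswith(prefix)}
    return max(candidates, key=candidates.get) if candidates else None

for _ in range(9):               # nine curious users search the rumour phrase
    log["acme widgets scam"] += 1

print(top_suggestion("acme widgets"))  # the damaging phrase now ranks first
```

The dynamic is self-reinforcing: litigation publicity drives searches, searches drive the suggestion, and the suggestion drives further searches.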
The cases raise important issues relating to freedom of speech, censorship and privacy and the liability of online intermediaries for various forms of infringing material which appears on the internet.
The courts should not be the arbiter of values; users should be able to decide for themselves what they think and what they choose to believe based on what they locate on the internet. Where the State decides to intervene, it interferes with the right of individuals to have at their disposal an uncensored autocomplete. If online intermediaries such as Google continue to be compelled by courts to disable and otherwise interfere with their algorithms, bending to the will of every person who is offended by what the real world thinks of them, this is a slippery slope to censorship, with courts deciding which results users should and should not see.