US tech groups scramble with post-election soul-searching

Facebook, Google and Twitter are under attack from outside and within over fake news

It is tempting to see the US presidential election as a watershed moment for the big US internet companies.

Under attack from outside and within — their deeply liberal workforces were guaranteed to be unsettled by any suggestion they played a part in the Trump victory — they have engaged in soul searching and some quick responses.

Facebook and Google, for instance, said this week that they will bar sites carrying fake news from their advertising networks. Both companies also said they would look for ways to weed out this particular form of bad content from their own services. And Twitter closed the accounts of a clutch of so-called “alt-right” figures in a crackdown on hate speech.

This has done little to calm concerns about whether their core business models make them complicit in spreading falsehood and stoking prejudice. As long as maximising user engagement is the ultimate business metric, there will always be a potential conflict with the responsibility that comes with running mass media platforms.

Critics argue that if democracy itself is now at risk, then a more interventionist approach to managing content is needed. But this is a movie we have seen many times before, and the ending does not seem in doubt.

The rash of fake political news echoes other plagues that have swept across the online platforms before, including spam, copyright infringement and counterfeit goods. The most obvious precedent is the outpouring of material from so-called “content farms” that threatened to overwhelm Google’s search engine with low-grade articles at the start of this decade. The response in all these cases — adjusting the algorithms to weed out undesirable or infringing content — is the same one that Google and Facebook are now counting on to bar the spread of fake news.

Critics say human editors and curators are needed. This is something the internet companies have resisted, and it is hard to see them bending under the current pressure. For a start, humans are expensive. They would also be guaranteed to antagonise one group or another and bring charges of bias. That might not matter much in the case of a normal news site, but for a Facebook, with 1.8bn users who log on at least once a month, the prospect of individuals making decisions that would affect what a large segment of humanity is able to read raises chilling possibilities. Much better to make a sweeping algorithmic change designed to excise a class of abusive content.

But while this sounds straightforward in principle, it cannot solve all the problems. If internet users are predisposed to believe false information that confirms their prejudices — and if they enthusiastically take part in spreading conspiracy theories — then falsehoods may be endemic to mass online communication platforms.

This issue is more difficult for Facebook, whose algorithms rely heavily on a social signal that comes from what a user’s friends are sharing. If its users promote unreliable information — particularly if it is not easily categorised as “news” — it will spread rapidly.

Ironically, it was only in June this year — just as the presidential election was entering its most important phase — that Facebook decided to de-emphasise news and give more prominence to personal material in its users’ newsfeeds. One possibility this raises is that relegating “real” news about the election left a vacuum that was filled by more dubious information — a sort of information age Gresham’s Law in which false news drives out good.

All of this will fuel self-examination for months to come and launch a thousand social science studies into how free and open communication platforms promote tribalism. But it is hardly likely to dent one of the tech world’s best business models: running mass communication platforms that do not, in the last resort, take a stance on the information that billions of people want to access or share on their systems.
